Test Report: Docker_Linux_crio_arm64 21139

acfd8b7155af18aff79ff1a575a474dfb6fd930f:2025-10-09:41835

Tests failed (39/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.63
35 TestAddons/parallel/Registry 15.73
36 TestAddons/parallel/RegistryCreds 0.5
37 TestAddons/parallel/Ingress 145.91
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.49
41 TestAddons/parallel/CSI 43.08
42 TestAddons/parallel/Headlamp 3.26
43 TestAddons/parallel/CloudSpanner 5.28
44 TestAddons/parallel/LocalPath 9.44
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 6.28
52 TestForceSystemdFlag 518.13
53 TestForceSystemdEnv 510.91
98 TestFunctional/parallel/ServiceCmdConnect 603.52
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
129 TestFunctional/parallel/ServiceCmd/DeployApp 600.83
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
148 TestFunctional/parallel/ServiceCmd/Format 0.4
149 TestFunctional/parallel/ServiceCmd/URL 0.4
191 TestJSONOutput/pause/Command 2.53
197 TestJSONOutput/unpause/Command 2.43
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 2.59
282 TestPause/serial/Pause 6.41
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.92
307 TestStartStop/group/old-k8s-version/serial/Pause 6.46
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.07
318 TestStartStop/group/no-preload/serial/Pause 8.02
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.43
328 TestStartStop/group/embed-certs/serial/Pause 6.57
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.21
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.41
345 TestStartStop/group/newest-cni/serial/Pause 6.47
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.07

TestAddons/serial/Volcano (0.63s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable volcano --alsologtostderr -v=1: exit status 11 (630.134145ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 18:31:12.988247  293042 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:31:12.989675  293042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:12.989695  293042 out.go:374] Setting ErrFile to fd 2...
	I1009 18:31:12.989702  293042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:12.989999  293042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:31:12.990351  293042 mustload.go:65] Loading cluster: addons-419518
	I1009 18:31:12.990748  293042 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:12.990768  293042 addons.go:606] checking whether the cluster is paused
	I1009 18:31:12.990875  293042 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:12.990897  293042 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:31:12.991426  293042 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:31:13.011316  293042 ssh_runner.go:195] Run: systemctl --version
	I1009 18:31:13.011393  293042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:31:13.033956  293042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:31:13.136970  293042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:31:13.137079  293042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:31:13.166954  293042 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:31:13.166976  293042 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:31:13.166981  293042 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:31:13.166985  293042 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:31:13.166989  293042 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:31:13.166992  293042 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:31:13.166997  293042 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:31:13.167000  293042 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:31:13.167003  293042 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:31:13.167009  293042 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:31:13.167012  293042 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:31:13.167016  293042 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:31:13.167019  293042 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:31:13.167023  293042 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:31:13.167028  293042 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:31:13.167037  293042 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:31:13.167040  293042 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:31:13.167045  293042 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:31:13.167048  293042 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:31:13.167051  293042 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:31:13.167057  293042 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:31:13.167062  293042 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:31:13.167065  293042 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:31:13.167068  293042 cri.go:89] found id: ""
	I1009 18:31:13.167135  293042 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:31:13.182619  293042 out.go:203] 
	W1009 18:31:13.185555  293042 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:31:13.185584  293042 out.go:285] * 
	* 
	W1009 18:31:13.534184  293042 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:31:13.537034  293042 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.63s)
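The exit status 11 above is raised by minikube's pre-disable check: it lists kube-system containers via crictl and then runs "sudo runc list -f json", which fails because /run/runc does not exist on this crio node, so the disable aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing the check by hand, assuming the addons-419518 profile from this run is still up (the /run/runc path is taken from the error message above, not verified independently):

	# confirm whether the runc state directory exists on the node
	out/minikube-linux-arm64 -p addons-419518 ssh "sudo ls -ld /run/runc"
	# re-run the exact command the paused check failed on
	out/minikube-linux-arm64 -p addons-419518 ssh "sudo runc list -f json"

The same MK_ADDON_DISABLE_PAUSED error repeats in the other addon-disable failures below.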

TestAddons/parallel/Registry (15.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.613599ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002843466s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004163499s
addons_test.go:392: (dbg) Run:  kubectl --context addons-419518 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-419518 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-419518 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.157399045s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable registry --alsologtostderr -v=1: exit status 11 (288.816994ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 18:31:38.346568  293991 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:31:38.347841  293991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:38.347917  293991 out.go:374] Setting ErrFile to fd 2...
	I1009 18:31:38.347939  293991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:38.349081  293991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:31:38.352398  293991 mustload.go:65] Loading cluster: addons-419518
	I1009 18:31:38.352875  293991 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:38.352920  293991 addons.go:606] checking whether the cluster is paused
	I1009 18:31:38.353068  293991 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:38.353108  293991 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:31:38.353744  293991 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:31:38.379213  293991 ssh_runner.go:195] Run: systemctl --version
	I1009 18:31:38.379280  293991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:31:38.396667  293991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:31:38.497740  293991 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:31:38.497861  293991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:31:38.531759  293991 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:31:38.531782  293991 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:31:38.531788  293991 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:31:38.531793  293991 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:31:38.531796  293991 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:31:38.531800  293991 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:31:38.531803  293991 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:31:38.531806  293991 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:31:38.531809  293991 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:31:38.531820  293991 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:31:38.531824  293991 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:31:38.531827  293991 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:31:38.531831  293991 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:31:38.531835  293991 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:31:38.531838  293991 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:31:38.531845  293991 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:31:38.531852  293991 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:31:38.531857  293991 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:31:38.531860  293991 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:31:38.531863  293991 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:31:38.531868  293991 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:31:38.531871  293991 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:31:38.531874  293991 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:31:38.531886  293991 cri.go:89] found id: ""
	I1009 18:31:38.531938  293991 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:31:38.547840  293991 out.go:203] 
	W1009 18:31:38.550745  293991 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:31:38.550770  293991 out.go:285] * 
	* 
	W1009 18:31:38.557164  293991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:31:38.560215  293991 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.73s)

TestAddons/parallel/RegistryCreds (0.5s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.931801ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-419518
addons_test.go:332: (dbg) Run:  kubectl --context addons-419518 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (262.115381ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 18:32:09.466865  295088 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:32:09.467772  295088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:09.467789  295088 out.go:374] Setting ErrFile to fd 2...
	I1009 18:32:09.467795  295088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:09.468059  295088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:32:09.468372  295088 mustload.go:65] Loading cluster: addons-419518
	I1009 18:32:09.468727  295088 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:09.468743  295088 addons.go:606] checking whether the cluster is paused
	I1009 18:32:09.468854  295088 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:09.468873  295088 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:32:09.469329  295088 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:32:09.486660  295088 ssh_runner.go:195] Run: systemctl --version
	I1009 18:32:09.486950  295088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:32:09.504497  295088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:32:09.608661  295088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:32:09.608740  295088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:32:09.639806  295088 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:32:09.639883  295088 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:32:09.639904  295088 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:32:09.639927  295088 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:32:09.639961  295088 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:32:09.639986  295088 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:32:09.640009  295088 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:32:09.640033  295088 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:32:09.640054  295088 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:32:09.640087  295088 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:32:09.640107  295088 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:32:09.640135  295088 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:32:09.640161  295088 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:32:09.640183  295088 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:32:09.640211  295088 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:32:09.640241  295088 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:32:09.640276  295088 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:32:09.640306  295088 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:32:09.640327  295088 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:32:09.640344  295088 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:32:09.640377  295088 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:32:09.640409  295088 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:32:09.640429  295088 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:32:09.640453  295088 cri.go:89] found id: ""
	I1009 18:32:09.640567  295088 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:32:09.657022  295088 out.go:203] 
	W1009 18:32:09.659963  295088 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:32:09.660042  295088 out.go:285] * 
	* 
	W1009 18:32:09.666519  295088 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:32:09.669578  295088 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)

TestAddons/parallel/Ingress (145.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-419518 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-419518 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-419518 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3565c790-8efa-4e61-93e4-f90562212e4a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3565c790-8efa-4e61-93e4-f90562212e4a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003622389s
I1009 18:32:00.942715  286309 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.025050283s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-419518 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
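The curl probe above gave up after roughly 2m11s with status 28, which matches curl's "operation timed out" exit code, so nothing answered on 127.0.0.1:80 behind the nginx.example.com host header inside the node. A minimal sketch for re-running the probe by hand with an explicit timeout and verbose output, assuming the addons-419518 profile is still running (the --max-time value is only illustrative):

	out/minikube-linux-arm64 -p addons-419518 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"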
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-419518
helpers_test.go:243: (dbg) docker inspect addons-419518:

-- stdout --
	[
	    {
	        "Id": "56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321",
	        "Created": "2025-10-09T18:28:42.324058319Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:28:42.388519821Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/hostname",
	        "HostsPath": "/var/lib/docker/containers/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/hosts",
	        "LogPath": "/var/lib/docker/containers/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321-json.log",
	        "Name": "/addons-419518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-419518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-419518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321",
	                "LowerDir": "/var/lib/docker/overlay2/7484de4b20c4ac2fe3289c42975fdfd1e60d70a00b89c0a1484add136bc9aa43-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7484de4b20c4ac2fe3289c42975fdfd1e60d70a00b89c0a1484add136bc9aa43/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7484de4b20c4ac2fe3289c42975fdfd1e60d70a00b89c0a1484add136bc9aa43/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7484de4b20c4ac2fe3289c42975fdfd1e60d70a00b89c0a1484add136bc9aa43/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-419518",
	                "Source": "/var/lib/docker/volumes/addons-419518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-419518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-419518",
	                "name.minikube.sigs.k8s.io": "addons-419518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fc0cbe83a23ef0fe527d97f52e6000b554580b7bab280db2d5f49fb6bb2b55c",
	            "SandboxKey": "/var/run/docker/netns/8fc0cbe83a23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-419518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:b9:06:ae:9c:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0be5c0b9ee5b9c522294f1cb4a7d749e78a12a4263f461a27a66ca4494c30aa4",
	                    "EndpointID": "69fad6a6eebaff057d0c26eddb4dcf8abbccda04805e4585fdb55dbd7c187c19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-419518",
	                        "56d0a47d6947"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-419518 -n addons-419518
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-419518 logs -n 25: (1.521955647s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-187653                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-187653 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ --download-only -p binary-mirror-572714 --alsologtostderr --binary-mirror http://127.0.0.1:37233 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-572714   │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ delete  │ -p binary-mirror-572714                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-572714   │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ addons  │ enable dashboard -p addons-419518                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-419518                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ start   │ -p addons-419518 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:31 UTC │
	│ addons  │ addons-419518 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ addons  │ addons-419518 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-419518 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ addons  │ addons-419518 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ ip      │ addons-419518 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │ 09 Oct 25 18:31 UTC │
	│ addons  │ addons-419518 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ addons  │ addons-419518 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ addons  │ addons-419518 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ ssh     │ addons-419518 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │                     │
	│ addons  │ addons-419518 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │                     │
	│ addons  │ addons-419518 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-419518                                                                                                                                                                                                                                                                                                                                                                                           │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │ 09 Oct 25 18:32 UTC │
	│ addons  │ addons-419518 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │                     │
	│ ssh     │ addons-419518 ssh cat /opt/local-path-provisioner/pvc-d2bf55d1-4477-4a8c-afa5-aa8f7149764a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │ 09 Oct 25 18:32 UTC │
	│ addons  │ addons-419518 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │                     │
	│ addons  │ addons-419518 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │                     │
	│ addons  │ addons-419518 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │                     │
	│ addons  │ addons-419518 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:32 UTC │                     │
	│ ip      │ addons-419518 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:34 UTC │ 09 Oct 25 18:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:28:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:28:16.383310  287073 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:16.383978  287073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:16.383993  287073 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:16.383998  287073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:16.384287  287073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:28:16.384772  287073 out.go:368] Setting JSON to false
	I1009 18:28:16.385563  287073 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4248,"bootTime":1760030249,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 18:28:16.385629  287073 start.go:141] virtualization:  
	I1009 18:28:16.389206  287073 out.go:179] * [addons-419518] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 18:28:16.393010  287073 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:28:16.393112  287073 notify.go:220] Checking for updates...
	I1009 18:28:16.398930  287073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:28:16.401830  287073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:28:16.404810  287073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 18:28:16.407697  287073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:28:16.410525  287073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:28:16.413499  287073 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:28:16.434348  287073 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 18:28:16.434485  287073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:16.494776  287073 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 18:28:16.486052862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:28:16.494878  287073 docker.go:318] overlay module found
	I1009 18:28:16.497866  287073 out.go:179] * Using the docker driver based on user configuration
	I1009 18:28:16.500723  287073 start.go:305] selected driver: docker
	I1009 18:28:16.500745  287073 start.go:925] validating driver "docker" against <nil>
	I1009 18:28:16.500759  287073 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:28:16.501469  287073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:16.557338  287073 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 18:28:16.548271133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:28:16.557506  287073 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:28:16.557730  287073 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:28:16.560557  287073 out.go:179] * Using Docker driver with root privileges
	I1009 18:28:16.563357  287073 cni.go:84] Creating CNI manager for ""
	I1009 18:28:16.563425  287073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:16.563433  287073 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:28:16.563526  287073 start.go:349] cluster config:
	{Name:addons-419518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1009 18:28:16.568454  287073 out.go:179] * Starting "addons-419518" primary control-plane node in "addons-419518" cluster
	I1009 18:28:16.571245  287073 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:28:16.574115  287073 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:28:16.576915  287073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:16.576970  287073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 18:28:16.576983  287073 cache.go:64] Caching tarball of preloaded images
	I1009 18:28:16.576998  287073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:28:16.577080  287073 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 18:28:16.577095  287073 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:28:16.577424  287073 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/config.json ...
	I1009 18:28:16.577455  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/config.json: {Name:mk38bba8b563021566f9112ebaf96251a12ac9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:16.592694  287073 cache.go:162] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:28:16.592843  287073 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 18:28:16.592863  287073 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 18:28:16.592868  287073 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 18:28:16.592875  287073 cache.go:165] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 18:28:16.592881  287073 cache.go:175] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1009 18:28:34.718653  287073 cache.go:177] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1009 18:28:34.718692  287073 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:28:34.718722  287073 start.go:360] acquireMachinesLock for addons-419518: {Name:mk799c7ee93ae50f3bf399d14394c57303eda19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:28:34.719454  287073 start.go:364] duration metric: took 694.245µs to acquireMachinesLock for "addons-419518"
	I1009 18:28:34.719490  287073 start.go:93] Provisioning new machine with config: &{Name:addons-419518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:28:34.719560  287073 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:28:34.722955  287073 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1009 18:28:34.723189  287073 start.go:159] libmachine.API.Create for "addons-419518" (driver="docker")
	I1009 18:28:34.723234  287073 client.go:168] LocalClient.Create starting
	I1009 18:28:34.723347  287073 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 18:28:35.262299  287073 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 18:28:35.604897  287073 cli_runner.go:164] Run: docker network inspect addons-419518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:35.620494  287073 cli_runner.go:211] docker network inspect addons-419518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:35.620588  287073 network_create.go:284] running [docker network inspect addons-419518] to gather additional debugging logs...
	I1009 18:28:35.620611  287073 cli_runner.go:164] Run: docker network inspect addons-419518
	W1009 18:28:35.636031  287073 cli_runner.go:211] docker network inspect addons-419518 returned with exit code 1
	I1009 18:28:35.636064  287073 network_create.go:287] error running [docker network inspect addons-419518]: docker network inspect addons-419518: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-419518 not found
	I1009 18:28:35.636079  287073 network_create.go:289] output of [docker network inspect addons-419518]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-419518 not found
	
	** /stderr **
	I1009 18:28:35.636200  287073 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:35.652176  287073 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05d40}
	I1009 18:28:35.652216  287073 network_create.go:124] attempt to create docker network addons-419518 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:28:35.652290  287073 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-419518 addons-419518
	I1009 18:28:35.709367  287073 network_create.go:108] docker network addons-419518 192.168.49.0/24 created
	I1009 18:28:35.709403  287073 kic.go:121] calculated static IP "192.168.49.2" for the "addons-419518" container
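	The network created above can be sanity-checked from the host; a minimal sketch, illustrative only and not part of the recorded run:
	  # confirm the bridge network exists and carries the subnet minikube reserved
	  docker network inspect addons-419518 --format '{{(index .IPAM.Config 0).Subnet}}'
	  # expected output: 192.168.49.0/24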
	I1009 18:28:35.709491  287073 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:35.725681  287073 cli_runner.go:164] Run: docker volume create addons-419518 --label name.minikube.sigs.k8s.io=addons-419518 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:35.743044  287073 oci.go:103] Successfully created a docker volume addons-419518
	I1009 18:28:35.743156  287073 cli_runner.go:164] Run: docker run --rm --name addons-419518-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-419518 --entrypoint /usr/bin/test -v addons-419518:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:28:37.813042  287073 cli_runner.go:217] Completed: docker run --rm --name addons-419518-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-419518 --entrypoint /usr/bin/test -v addons-419518:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.06983909s)
	I1009 18:28:37.813071  287073 oci.go:107] Successfully prepared a docker volume addons-419518
	I1009 18:28:37.813116  287073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:37.813127  287073 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:28:37.813186  287073 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-419518:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:28:42.249971  287073 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-419518:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.436740581s)
	I1009 18:28:42.250009  287073 kic.go:203] duration metric: took 4.436877352s to extract preloaded images to volume ...
	W1009 18:28:42.250193  287073 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 18:28:42.250330  287073 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:42.309082  287073 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-419518 --name addons-419518 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-419518 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-419518 --network addons-419518 --ip 192.168.49.2 --volume addons-419518:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:28:42.615020  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Running}}
	I1009 18:28:42.637639  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:28:42.664945  287073 cli_runner.go:164] Run: docker exec addons-419518 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:42.723008  287073 oci.go:144] the created container "addons-419518" has a running status.
	I1009 18:28:42.723034  287073 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa...
	I1009 18:28:43.255817  287073 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:43.276303  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:28:43.293575  287073 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:43.293598  287073 kic_runner.go:114] Args: [docker exec --privileged addons-419518 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:43.333460  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:28:43.350688  287073 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:43.350807  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:43.367047  287073 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:43.367375  287073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1009 18:28:43.367390  287073 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:43.367961  287073 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36950->127.0.0.1:33140: read: connection reset by peer
	I1009 18:28:46.513684  287073 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-419518
	
	I1009 18:28:46.513710  287073 ubuntu.go:182] provisioning hostname "addons-419518"
	I1009 18:28:46.513784  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:46.531372  287073 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:46.531677  287073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1009 18:28:46.531700  287073 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-419518 && echo "addons-419518" | sudo tee /etc/hostname
	I1009 18:28:46.682750  287073 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-419518
	
	I1009 18:28:46.682829  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:46.700476  287073 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:46.700790  287073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1009 18:28:46.700812  287073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-419518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-419518/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-419518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:46.850553  287073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:46.850643  287073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 18:28:46.850702  287073 ubuntu.go:190] setting up certificates
	I1009 18:28:46.850739  287073 provision.go:84] configureAuth start
	I1009 18:28:46.850832  287073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-419518
	I1009 18:28:46.867008  287073 provision.go:143] copyHostCerts
	I1009 18:28:46.867095  287073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 18:28:46.867217  287073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 18:28:46.867272  287073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 18:28:46.867315  287073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.addons-419518 san=[127.0.0.1 192.168.49.2 addons-419518 localhost minikube]
	I1009 18:28:47.225390  287073 provision.go:177] copyRemoteCerts
	I1009 18:28:47.225466  287073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:47.225536  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.243705  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:47.346245  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:47.364220  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:28:47.380670  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
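	The server certificate generated at 18:28:46.867315 lists its SANs in the log (127.0.0.1, 192.168.49.2, addons-419518, localhost, minikube); a minimal sketch, illustrative only and not captured from the run, for confirming them on the host copy:
	  # list the SANs baked into the newly generated server certificate
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'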
	I1009 18:28:47.397726  287073 provision.go:87] duration metric: took 546.958716ms to configureAuth
	I1009 18:28:47.397751  287073 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:47.397960  287073 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:47.398072  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.416549  287073 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:47.416853  287073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1009 18:28:47.416868  287073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:28:47.670421  287073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:28:47.670538  287073 machine.go:96] duration metric: took 4.319827498s to provisionDockerMachine
	I1009 18:28:47.670607  287073 client.go:171] duration metric: took 12.947333618s to LocalClient.Create
	I1009 18:28:47.670654  287073 start.go:167] duration metric: took 12.947463956s to libmachine.API.Create "addons-419518"
	I1009 18:28:47.670685  287073 start.go:293] postStartSetup for "addons-419518" (driver="docker")
	I1009 18:28:47.670727  287073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:47.670820  287073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:47.670953  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.689525  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:47.798162  287073 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:47.801282  287073 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:47.801312  287073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:47.801324  287073 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 18:28:47.801389  287073 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 18:28:47.801415  287073 start.go:296] duration metric: took 130.694173ms for postStartSetup
	I1009 18:28:47.801719  287073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-419518
	I1009 18:28:47.817688  287073 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/config.json ...
	I1009 18:28:47.817977  287073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:47.818026  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.834293  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:47.930862  287073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:47.935499  287073 start.go:128] duration metric: took 13.215925289s to createHost
	I1009 18:28:47.935521  287073 start.go:83] releasing machines lock for "addons-419518", held for 13.21605045s
	I1009 18:28:47.935609  287073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-419518
	I1009 18:28:47.951558  287073 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:47.951578  287073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:47.951610  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.951637  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.968735  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:47.975780  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:48.166035  287073 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:48.172300  287073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:28:48.208833  287073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:48.213094  287073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:48.213178  287073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:48.241954  287073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 18:28:48.241979  287073 start.go:495] detecting cgroup driver to use...
	I1009 18:28:48.242023  287073 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 18:28:48.242089  287073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:28:48.259362  287073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:28:48.271911  287073 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:48.271999  287073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:48.289396  287073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:48.308160  287073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:48.422068  287073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:48.543662  287073 docker.go:234] disabling docker service ...
	I1009 18:28:48.543731  287073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:48.563889  287073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:48.576745  287073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:48.684979  287073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:48.802459  287073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:48.815400  287073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:48.829902  287073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:28:48.829982  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.838907  287073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:28:48.838984  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.847775  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.856619  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.865496  287073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:48.873358  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.882955  287073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.896264  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.904917  287073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:48.912659  287073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:48.919897  287073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:49.034967  287073 ssh_runner.go:195] Run: sudo systemctl restart crio
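	The run of sed edits just before this restart rewrites CRI-O's drop-in config in place. A minimal sketch, reconstructed from those commands rather than captured from the node, of verifying the result over SSH:
	  # values the sed edits above are expected to leave in the drop-in
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # expected (approximately):
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",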
	I1009 18:28:49.173667  287073 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:28:49.173818  287073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:28:49.177447  287073 start.go:563] Will wait 60s for crictl version
	I1009 18:28:49.177557  287073 ssh_runner.go:195] Run: which crictl
	I1009 18:28:49.181037  287073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:49.210098  287073 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:28:49.210307  287073 ssh_runner.go:195] Run: crio --version
	I1009 18:28:49.237218  287073 ssh_runner.go:195] Run: crio --version
	I1009 18:28:49.270591  287073 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:28:49.273464  287073 cli_runner.go:164] Run: docker network inspect addons-419518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:49.289001  287073 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:49.292769  287073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:49.302073  287073 kubeadm.go:883] updating cluster {Name:addons-419518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:28:49.302229  287073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:49.302283  287073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:49.339339  287073 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:49.339363  287073 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:28:49.339418  287073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:49.368132  287073 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:49.368156  287073 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:28:49.368165  287073 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:28:49.368249  287073 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-419518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
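	The kubelet unit override above is materialized on the node a little further down (the 10-kubeadm.conf scp at 18:28:49.455882). A minimal sketch, illustrative only, of reviewing the effective unit once it is in place:
	  # show the kubelet service together with the minikube drop-in
	  sudo systemctl cat kubelet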
	I1009 18:28:49.368337  287073 ssh_runner.go:195] Run: crio config
	I1009 18:28:49.440594  287073 cni.go:84] Creating CNI manager for ""
	I1009 18:28:49.440618  287073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:49.440638  287073 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:49.440662  287073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-419518 NodeName:addons-419518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:49.440790  287073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-419518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:28:49.440867  287073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:28:49.448483  287073 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:28:49.448596  287073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:49.455882  287073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 18:28:49.468592  287073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:49.480918  287073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
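	At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new. The init invocation itself is not captured in this excerpt; as a hedged sketch only, a config like this is normally consumed with:
	  # illustrative only; the exact flags minikube passes are not shown in this log
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml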
	I1009 18:28:49.492979  287073 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:49.496336  287073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:49.505609  287073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:49.616678  287073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:28:49.632180  287073 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518 for IP: 192.168.49.2
	I1009 18:28:49.632203  287073 certs.go:195] generating shared ca certs ...
	I1009 18:28:49.632221  287073 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:49.632352  287073 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 18:28:49.786119  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt ...
	I1009 18:28:49.786153  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt: {Name:mk1860adab5beccf33a1f32dfcd270757df005b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:49.786367  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key ...
	I1009 18:28:49.786382  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key: {Name:mk3320be062f4dee91fc84c7f329a34d237b7502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:49.787116  287073 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 18:28:50.399460  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt ...
	I1009 18:28:50.399491  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt: {Name:mk26acd207efcd41f9412775a1e0407b14d413d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:50.400257  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key ...
	I1009 18:28:50.400279  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key: {Name:mkc06cb89887ad60290183cd7568aaa19cef53d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:50.400365  287073 certs.go:257] generating profile certs ...
	I1009 18:28:50.400426  287073 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.key
	I1009 18:28:50.400445  287073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt with IP's: []
	I1009 18:28:51.015473  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt ...
	I1009 18:28:51.015509  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: {Name:mk78dfa52f9042240dcabd55167ef3c11cf2e69d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:51.015726  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.key ...
	I1009 18:28:51.015746  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.key: {Name:mkb16c588a54c1c2ed524db38307aaab1a59b1dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:51.016468  287073 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key.f0bdca2c
	I1009 18:28:51.016495  287073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt.f0bdca2c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:28:51.750977  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt.f0bdca2c ...
	I1009 18:28:51.751009  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt.f0bdca2c: {Name:mkc7c9dd7e400ec5f1b2f053bc73347849651ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:51.751200  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key.f0bdca2c ...
	I1009 18:28:51.751214  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key.f0bdca2c: {Name:mk8e7f48ee0436fbe12d13f9bfc9c29d4e972878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:51.751298  287073 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt.f0bdca2c -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt
	I1009 18:28:51.751377  287073 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key.f0bdca2c -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key
	I1009 18:28:51.751423  287073 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.key
	I1009 18:28:51.751446  287073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.crt with IP's: []
	I1009 18:28:53.143796  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.crt ...
	I1009 18:28:53.143827  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.crt: {Name:mka99b3fbd9ad4dfe6aa98d60282e420743894b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:53.144694  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.key ...
	I1009 18:28:53.144713  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.key: {Name:mkeba52bb76e31f0edf7518f31c096524489007f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:53.144909  287073 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:53.144952  287073 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:28:53.144983  287073 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:53.145011  287073 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 18:28:53.145584  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:53.164068  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:28:53.181369  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:53.198620  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:53.215940  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:28:53.233819  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:28:53.251298  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:53.268803  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:28:53.287014  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:53.304490  287073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:53.317015  287073 ssh_runner.go:195] Run: openssl version
	I1009 18:28:53.323339  287073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:53.331693  287073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:53.335290  287073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:53.335361  287073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:53.378772  287073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:28:53.388206  287073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:28:53.393070  287073 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:28:53.393171  287073 kubeadm.go:400] StartCluster: {Name:addons-419518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:53.393305  287073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:53.393417  287073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:53.427736  287073 cri.go:89] found id: ""
	I1009 18:28:53.427889  287073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:53.439190  287073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:53.448944  287073 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:28:53.449015  287073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:53.457463  287073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:28:53.457496  287073 kubeadm.go:157] found existing configuration files:
	
	I1009 18:28:53.457588  287073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:28:53.466258  287073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:28:53.466324  287073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:28:53.473783  287073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:28:53.481847  287073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:28:53.481917  287073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:28:53.490087  287073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:28:53.498107  287073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:28:53.498267  287073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:28:53.505788  287073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:28:53.513568  287073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:28:53.513704  287073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:28:53.521394  287073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:28:53.583888  287073 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 18:28:53.584142  287073 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 18:28:53.647634  287073 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:29:09.520579  287073 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:29:09.520655  287073 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:29:09.520781  287073 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:29:09.520871  287073 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 18:29:09.520914  287073 kubeadm.go:318] OS: Linux
	I1009 18:29:09.520974  287073 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:29:09.521033  287073 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 18:29:09.521084  287073 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:29:09.521145  287073 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:29:09.521214  287073 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:29:09.521279  287073 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:29:09.521338  287073 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:29:09.521392  287073 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:29:09.521445  287073 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 18:29:09.521532  287073 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:29:09.521634  287073 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:29:09.521751  287073 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:29:09.521820  287073 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:29:09.524797  287073 out.go:252]   - Generating certificates and keys ...
	I1009 18:29:09.524900  287073 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:29:09.525006  287073 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:29:09.525108  287073 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:29:09.525192  287073 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:29:09.525258  287073 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:29:09.525313  287073 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:29:09.525378  287073 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:29:09.525500  287073 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-419518 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:29:09.525556  287073 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:29:09.525674  287073 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-419518 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:29:09.525743  287073 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:29:09.525811  287073 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:29:09.525862  287073 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:29:09.525922  287073 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:29:09.525995  287073 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:29:09.526057  287073 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:29:09.526116  287073 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:29:09.526208  287073 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:29:09.526268  287073 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:29:09.526352  287073 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:29:09.526421  287073 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:29:09.529460  287073 out.go:252]   - Booting up control plane ...
	I1009 18:29:09.529590  287073 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:29:09.529682  287073 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:29:09.529774  287073 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:29:09.529902  287073 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:29:09.529997  287073 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:29:09.530102  287073 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:29:09.530212  287073 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:29:09.530305  287073 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:29:09.530450  287073 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:29:09.530560  287073 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:29:09.530624  287073 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.006155286s
	I1009 18:29:09.530725  287073 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:29:09.530810  287073 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:29:09.530902  287073 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:29:09.530984  287073 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:29:09.531062  287073 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.740782996s
	I1009 18:29:09.531132  287073 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.959780643s
	I1009 18:29:09.531202  287073 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502154429s
	I1009 18:29:09.531309  287073 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:29:09.531436  287073 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:29:09.531497  287073 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:29:09.531700  287073 kubeadm.go:318] [mark-control-plane] Marking the node addons-419518 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:29:09.531760  287073 kubeadm.go:318] [bootstrap-token] Using token: oq7qdz.vhp4g7s58eo9w6q7
	I1009 18:29:09.534726  287073 out.go:252]   - Configuring RBAC rules ...
	I1009 18:29:09.534874  287073 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:29:09.534998  287073 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:29:09.535188  287073 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:29:09.535349  287073 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:29:09.535484  287073 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:29:09.535588  287073 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:29:09.535753  287073 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:29:09.535812  287073 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 18:29:09.535887  287073 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 18:29:09.535900  287073 kubeadm.go:318] 
	I1009 18:29:09.535975  287073 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 18:29:09.535983  287073 kubeadm.go:318] 
	I1009 18:29:09.536070  287073 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 18:29:09.536078  287073 kubeadm.go:318] 
	I1009 18:29:09.536106  287073 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 18:29:09.536183  287073 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:29:09.536242  287073 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:29:09.536250  287073 kubeadm.go:318] 
	I1009 18:29:09.536321  287073 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 18:29:09.536352  287073 kubeadm.go:318] 
	I1009 18:29:09.536419  287073 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:29:09.536458  287073 kubeadm.go:318] 
	I1009 18:29:09.536535  287073 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 18:29:09.536641  287073 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:29:09.536755  287073 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:29:09.536779  287073 kubeadm.go:318] 
	I1009 18:29:09.536915  287073 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:29:09.537042  287073 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 18:29:09.537050  287073 kubeadm.go:318] 
	I1009 18:29:09.537147  287073 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token oq7qdz.vhp4g7s58eo9w6q7 \
	I1009 18:29:09.537264  287073 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 18:29:09.537286  287073 kubeadm.go:318] 	--control-plane 
	I1009 18:29:09.537291  287073 kubeadm.go:318] 
	I1009 18:29:09.537387  287073 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:29:09.537392  287073 kubeadm.go:318] 
	I1009 18:29:09.537485  287073 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token oq7qdz.vhp4g7s58eo9w6q7 \
	I1009 18:29:09.537613  287073 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 18:29:09.537622  287073 cni.go:84] Creating CNI manager for ""
	I1009 18:29:09.537629  287073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:29:09.540699  287073 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 18:29:09.543774  287073 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 18:29:09.547895  287073 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 18:29:09.547968  287073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 18:29:09.560979  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 18:29:09.853183  287073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:29:09.853382  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:09.853464  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-419518 minikube.k8s.io/updated_at=2025_10_09T18_29_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=addons-419518 minikube.k8s.io/primary=true
	I1009 18:29:10.027144  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:10.027225  287073 ops.go:34] apiserver oom_adj: -16
	I1009 18:29:10.528085  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:11.027672  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:11.527259  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:12.027930  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:12.528214  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:13.027912  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:13.528126  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:14.027429  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:14.527264  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:14.693411  287073 kubeadm.go:1113] duration metric: took 4.840074954s to wait for elevateKubeSystemPrivileges
	I1009 18:29:14.693456  287073 kubeadm.go:402] duration metric: took 21.300290128s to StartCluster
	I1009 18:29:14.693482  287073 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:29:14.693613  287073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:29:14.694436  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:29:14.695088  287073 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:29:14.696090  287073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:29:14.696449  287073 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:29:14.696512  287073 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:29:14.696682  287073 addons.go:69] Setting yakd=true in profile "addons-419518"
	I1009 18:29:14.696703  287073 addons.go:238] Setting addon yakd=true in "addons-419518"
	I1009 18:29:14.696738  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.697414  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.699870  287073 addons.go:69] Setting metrics-server=true in profile "addons-419518"
	I1009 18:29:14.699893  287073 addons.go:238] Setting addon metrics-server=true in "addons-419518"
	I1009 18:29:14.699920  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.700441  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.701253  287073 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-419518"
	I1009 18:29:14.701347  287073 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-419518"
	I1009 18:29:14.701423  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.705638  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.711108  287073 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-419518"
	I1009 18:29:14.711145  287073 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-419518"
	I1009 18:29:14.711180  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.711733  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.719940  287073 addons.go:69] Setting cloud-spanner=true in profile "addons-419518"
	I1009 18:29:14.719973  287073 addons.go:238] Setting addon cloud-spanner=true in "addons-419518"
	I1009 18:29:14.720008  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.721574  287073 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-419518"
	I1009 18:29:14.721633  287073 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-419518"
	I1009 18:29:14.721658  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.722337  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.730759  287073 addons.go:69] Setting default-storageclass=true in profile "addons-419518"
	I1009 18:29:14.730769  287073 addons.go:69] Setting registry=true in profile "addons-419518"
	I1009 18:29:14.730792  287073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-419518"
	I1009 18:29:14.730799  287073 addons.go:238] Setting addon registry=true in "addons-419518"
	I1009 18:29:14.730840  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.731113  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.731324  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.743383  287073 addons.go:69] Setting registry-creds=true in profile "addons-419518"
	I1009 18:29:14.743437  287073 addons.go:238] Setting addon registry-creds=true in "addons-419518"
	I1009 18:29:14.743482  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.750064  287073 addons.go:69] Setting gcp-auth=true in profile "addons-419518"
	I1009 18:29:14.750107  287073 mustload.go:65] Loading cluster: addons-419518
	I1009 18:29:14.750511  287073 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:29:14.750785  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.751204  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.774347  287073 addons.go:69] Setting storage-provisioner=true in profile "addons-419518"
	I1009 18:29:14.774394  287073 addons.go:238] Setting addon storage-provisioner=true in "addons-419518"
	I1009 18:29:14.774435  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.775018  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.783593  287073 addons.go:69] Setting ingress=true in profile "addons-419518"
	I1009 18:29:14.783714  287073 addons.go:238] Setting addon ingress=true in "addons-419518"
	I1009 18:29:14.783817  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.784500  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.808583  287073 addons.go:69] Setting ingress-dns=true in profile "addons-419518"
	I1009 18:29:14.808673  287073 addons.go:238] Setting addon ingress-dns=true in "addons-419518"
	I1009 18:29:14.808758  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.809357  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.810121  287073 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-419518"
	I1009 18:29:14.810199  287073 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-419518"
	I1009 18:29:14.810724  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.841999  287073 addons.go:69] Setting inspektor-gadget=true in profile "addons-419518"
	I1009 18:29:14.842097  287073 addons.go:238] Setting addon inspektor-gadget=true in "addons-419518"
	I1009 18:29:14.842196  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.842920  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.843687  287073 addons.go:69] Setting volcano=true in profile "addons-419518"
	I1009 18:29:14.843769  287073 addons.go:238] Setting addon volcano=true in "addons-419518"
	I1009 18:29:14.843893  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.844881  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.865953  287073 out.go:179] * Verifying Kubernetes components...
	I1009 18:29:14.866344  287073 addons.go:69] Setting volumesnapshots=true in profile "addons-419518"
	I1009 18:29:14.866374  287073 addons.go:238] Setting addon volumesnapshots=true in "addons-419518"
	I1009 18:29:14.866414  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.866979  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.871792  287073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:29:14.881300  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.944933  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:29:14.951319  287073 addons.go:238] Setting addon default-storageclass=true in "addons-419518"
	I1009 18:29:14.951357  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.951872  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.996556  287073 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:29:15.001421  287073 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:29:15.004420  287073 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1009 18:29:15.004579  287073 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:29:15.008285  287073 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:29:15.008321  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:29:15.008404  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.053184  287073 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-419518"
	I1009 18:29:15.053232  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:15.053679  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:15.076550  287073 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1009 18:29:15.076783  287073 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1009 18:29:15.091467  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:29:15.091500  287073 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:29:15.091582  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.104535  287073 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1009 18:29:15.117249  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:15.119263  287073 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 18:29:15.119291  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1009 18:29:15.119402  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.130392  287073 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1009 18:29:15.155406  287073 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:29:15.155478  287073 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:29:15.155591  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.181069  287073 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:29:15.181153  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:29:15.181276  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	W1009 18:29:15.202330  287073 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 18:29:15.202813  287073 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1009 18:29:15.204238  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:29:15.206095  287073 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:29:15.206117  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1009 18:29:15.206302  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.212815  287073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:29:15.231871  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:29:15.234916  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:29:15.238113  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:29:15.240723  287073 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1009 18:29:15.240811  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 18:29:15.245140  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:29:15.248272  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:29:15.248285  287073 out.go:179]   - Using image docker.io/registry:3.0.0
	I1009 18:29:15.248273  287073 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 18:29:15.248416  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:29:15.248427  287073 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:29:15.248501  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.264811  287073 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:29:15.264831  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:29:15.264903  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.248305  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1009 18:29:15.266622  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.290428  287073 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1009 18:29:15.248311  287073 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1009 18:29:15.293501  287073 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:29:15.293516  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:29:15.293584  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.300014  287073 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:29:15.300042  287073 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1009 18:29:15.300108  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.314058  287073 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:29:15.248315  287073 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:29:15.317168  287073 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:29:15.317191  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:29:15.317261  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.320325  287073 out.go:179]   - Using image docker.io/busybox:stable
	I1009 18:29:15.323253  287073 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:29:15.323275  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:29:15.323347  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.350687  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:29:15.353486  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:29:15.353523  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:29:15.353593  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.376175  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.377343  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.378075  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.380070  287073 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:29:15.380085  287073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:29:15.380178  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.383583  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.410768  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.416195  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.417498  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.463441  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.486304  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.492407  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.504855  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.510237  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.526395  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.541941  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.551380  287073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:29:15.551841  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:16.065175  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:29:16.091079  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 18:29:16.094472  287073 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:29:16.094544  287073 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:29:16.108747  287073 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:29:16.108817  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:29:16.118628  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:29:16.139666  287073 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:29:16.139738  287073 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:29:16.141718  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:29:16.156979  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 18:29:16.161144  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:29:16.181333  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:29:16.190847  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:29:16.196619  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:29:16.198349  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:29:16.198418  287073 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:29:16.202978  287073 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:29:16.203048  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:29:16.231143  287073 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:16.231214  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1009 18:29:16.236664  287073 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:29:16.236728  287073 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:29:16.245661  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:29:16.245739  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:29:16.286459  287073 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:29:16.286529  287073 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:29:16.336654  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:29:16.337994  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:29:16.338052  287073 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:29:16.349772  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:29:16.349842  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:29:16.388923  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:16.428044  287073 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:29:16.428126  287073 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:29:16.450649  287073 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:29:16.450713  287073 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:29:16.493683  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:29:16.493754  287073 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:29:16.542580  287073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.329733513s)
	I1009 18:29:16.543476  287073 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1009 18:29:16.543443  287073 node_ready.go:35] waiting up to 6m0s for node "addons-419518" to be "Ready" ...
	I1009 18:29:16.562672  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:29:16.562744  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:29:16.609240  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:29:16.695644  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:29:16.695711  287073 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:29:16.697980  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:29:16.698045  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:29:16.743636  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:29:16.743708  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:29:16.856299  287073 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:29:16.856366  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:29:16.943239  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:29:16.973535  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:29:16.973607  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:29:17.048061  287073 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-419518" context rescaled to 1 replicas
	I1009 18:29:17.058910  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:29:17.176166  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:29:17.176194  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:29:17.407680  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:29:17.407753  287073 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:29:17.434293  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.369041906s)
	I1009 18:29:17.434570  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.34342767s)
	I1009 18:29:17.589427  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:29:17.589451  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:29:17.752418  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:29:17.752447  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:29:18.030200  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:29:18.030228  287073 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:29:18.279354  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1009 18:29:18.629317  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:19.315739  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.197029353s)
	I1009 18:29:20.706212  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.56441406s)
	I1009 18:29:20.706242  287073 addons.go:479] Verifying addon ingress=true in "addons-419518"
	I1009 18:29:20.706604  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.549551742s)
	I1009 18:29:20.706757  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.545541377s)
	I1009 18:29:20.706847  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.525439508s)
	I1009 18:29:20.706901  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.515985161s)
	I1009 18:29:20.706937  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.510258174s)
	I1009 18:29:20.706980  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.370254508s)
	I1009 18:29:20.706991  287073 addons.go:479] Verifying addon registry=true in "addons-419518"
	I1009 18:29:20.707050  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.318044923s)
	W1009 18:29:20.707071  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:20.707090  287073 retry.go:31] will retry after 275.330326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
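The failure recorded above repeats throughout this run: every apply of /etc/kubernetes/addons/ig-crd.yaml fails kubectl validation with "apiVersion not set, kind not set", and minikube re-queues the apply with a growing delay (retry.go). As a rough illustration of that retry-with-backoff pattern only, here is a minimal stdlib-only Go sketch; the function name, attempt count, and durations are assumptions for the sketch and are not minikube's actual retry.go implementation.

```go
// Illustrative only: a stdlib-only retry-with-backoff wrapper around
// "kubectl apply", in the spirit of the retry.go lines in this log.
// Manifest path, attempts, and waits are assumptions, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f <manifest>` until it
// succeeds or the attempts are exhausted, roughly doubling the wait each time.
func applyWithRetry(manifest string, attempts int, initialWait time.Duration) error {
	wait := initialWait
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply of %s failed: %v\n%s", manifest, err, out)
		fmt.Printf("will retry after %s: %v\n", wait, lastErr)
		time.Sleep(wait)
		wait *= 2
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/ig-crd.yaml", 5, 300*time.Millisecond); err != nil {
		fmt.Println("giving up:", err)
	}
}
```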
	I1009 18:29:20.707242  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.097928138s)
	I1009 18:29:20.707308  287073 addons.go:479] Verifying addon metrics-server=true in "addons-419518"
	I1009 18:29:20.707567  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.648625719s)
	W1009 18:29:20.707592  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:29:20.707607  287073 retry.go:31] will retry after 325.742627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:29:20.707694  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.764089943s)
	I1009 18:29:20.711604  287073 out.go:179] * Verifying ingress addon...
	I1009 18:29:20.711616  287073 out.go:179] * Verifying registry addon...
	I1009 18:29:20.713519  287073 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-419518 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:29:20.717024  287073 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:29:20.717888  287073 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:29:20.726513  287073 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:29:20.726534  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:20.731187  287073 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:29:20.731207  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
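The kapi.go lines above (and the many "current state: Pending" lines that follow) record minikube polling pods by label selector until they become ready. As a hedged sketch of that polling idea only, the following stdlib Go snippet shells out to kubectl and loops until every matching pod reports phase Running; the namespace, selector, interval, and timeout are assumptions, and this is not minikube's kapi.go implementation.

```go
// Illustrative only: poll pods by label selector until none are Pending,
// in the spirit of the "waiting for pod" kapi.go log lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodsRunning polls `kubectl get pods` for the given selector until
// every matching pod reports phase Running, or the timeout elapses.
func waitForPodsRunning(namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
			fmt.Printf("waiting for pods %q, current phases: %v\n", selector, phases)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q in %q not Running within %s", selector, namespace, timeout)
}

func main() {
	if err := waitForPodsRunning("kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```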
	I1009 18:29:20.983196  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:21.033657  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1009 18:29:21.053664  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:21.234395  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.954992453s)
	I1009 18:29:21.234504  287073 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-419518"
	I1009 18:29:21.239605  287073 out.go:179] * Verifying csi-hostpath-driver addon...
	I1009 18:29:21.240334  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:21.241054  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:21.243299  287073 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:29:21.282209  287073 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:29:21.282233  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:21.722442  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:21.722610  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:21.822259  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:22.058681  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.075392156s)
	W1009 18:29:22.058733  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:22.058753  287073 retry.go:31] will retry after 411.018462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:22.221078  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:22.221134  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:22.246984  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:22.470811  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:22.722205  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:22.722574  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:22.746895  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:22.808617  287073 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:29:22.808717  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:22.839223  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:22.972813  287073 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:29:22.994056  287073 addons.go:238] Setting addon gcp-auth=true in "addons-419518"
	I1009 18:29:22.994103  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:22.994582  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:23.023360  287073 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:29:23.023418  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:23.048449  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:23.222070  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:23.222818  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:23.247165  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:23.547137  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:23.721516  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:23.722247  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:23.748983  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:23.991210  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.957512159s)
	I1009 18:29:23.991322  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.520469268s)
	W1009 18:29:23.991577  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:23.991605  287073 retry.go:31] will retry after 503.16713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:23.994483  287073 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:29:23.997485  287073 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1009 18:29:24.000296  287073 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:29:24.000325  287073 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:29:24.014714  287073 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:29:24.014748  287073 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:29:24.029786  287073 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:29:24.029812  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:29:24.044268  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:29:24.222403  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:24.223076  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:24.247596  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:24.495422  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:24.569499  287073 addons.go:479] Verifying addon gcp-auth=true in "addons-419518"
	I1009 18:29:24.572904  287073 out.go:179] * Verifying gcp-auth addon...
	I1009 18:29:24.576571  287073 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:29:24.590148  287073 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:29:24.590225  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:24.722030  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:24.722622  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:24.746846  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:25.082065  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:25.221446  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:25.221782  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:25.246555  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:25.340200  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:25.340229  287073 retry.go:31] will retry after 1.124225061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:29:25.547274  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:25.580320  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:25.720314  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:25.721535  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:25.746402  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:26.080659  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:26.220906  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:26.221588  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:26.246890  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:26.465296  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:26.579900  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:26.720643  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:26.721882  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:26.747445  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:27.080179  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:27.225131  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:27.225906  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:27.250162  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:27.277331  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:27.277363  287073 retry.go:31] will retry after 1.152654696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:29:27.547720  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:27.580756  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:27.721255  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:27.721431  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:27.747372  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:28.080834  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:28.221292  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:28.221416  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:28.246147  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:28.430297  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:28.580555  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:28.722990  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:28.723074  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:28.747846  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:29.080803  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:29.221489  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:29.222963  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:29.247375  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:29.277038  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:29.277071  287073 retry.go:31] will retry after 1.406240229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:29.586491  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:29.720613  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:29.721352  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:29.747076  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:30.048808  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:30.081398  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:30.220490  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:30.221220  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:30.247130  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:30.582458  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:30.683831  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:30.721875  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:30.722808  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:30.753951  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:31.080077  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:31.221570  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:31.221913  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:31.247379  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:31.520125  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:31.520158  287073 retry.go:31] will retry after 1.978715696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:31.579771  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:31.721180  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:31.721299  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:31.746988  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:32.082345  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:32.220613  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:32.221188  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:32.247022  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:32.548294  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:32.586483  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:32.721715  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:32.722298  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:32.746255  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:33.079751  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:33.221169  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:33.221672  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:33.246561  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:33.499820  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:33.579913  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:33.719575  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:33.721581  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:33.747005  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:34.080562  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:34.220785  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:34.222616  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:34.246947  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:34.296830  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:34.296868  287073 retry.go:31] will retry after 5.768921432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:34.579790  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:34.720770  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:34.720966  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:34.746779  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:35.048017  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:35.081229  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:35.220418  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:35.221308  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:35.246314  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:35.581514  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:35.720433  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:35.720786  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:35.748469  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:36.081641  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:36.220947  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:36.221063  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:36.247152  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:36.583322  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:36.720147  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:36.721523  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:36.746619  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:37.080388  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:37.220452  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:37.221466  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:37.246813  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:37.547719  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:37.580941  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:37.721642  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:37.721781  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:37.746837  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:38.081418  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:38.221329  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:38.221430  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:38.246443  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:38.580158  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:38.721717  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:38.721949  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:38.746624  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:39.079713  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:39.221129  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:39.221188  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:39.246666  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:39.580526  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:39.721049  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:39.721213  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:39.747013  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:40.048348  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:40.066699  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:40.081043  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:40.221137  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:40.222610  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:40.246870  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:40.580187  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:40.722367  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:40.723093  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:40.746877  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:40.891201  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:40.891280  287073 retry.go:31] will retry after 6.829574361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
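The validation failure above indicates that the documents in ig-crd.yaml are being applied without the two mandatory top-level fields every Kubernetes manifest must declare, apiVersion and kind, so each retry of the same apply fails identically until the file is fixed or validation is turned off. A quick way to confirm what the file on the node actually contains, reusing only the paths already shown in the log (a diagnostic sketch, not part of the test run itself):

    # list the top-level identification fields of each document in the manifest
    sudo grep -nE '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml
    # repeat the validation client-side without touching cluster state
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml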
	I1009 18:29:41.081033  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:41.221547  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:41.221827  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:41.246742  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:41.587429  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:41.721068  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:41.721392  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:41.746985  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:42.081944  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:42.221328  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:42.221702  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:42.247369  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:42.547238  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:42.580402  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:42.720892  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:42.721420  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:42.746840  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:43.079941  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:43.221324  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:43.221387  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:43.247247  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:43.580777  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:43.720752  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:43.721007  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:43.746664  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:44.081548  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:44.221776  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:44.221866  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:44.247037  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:44.548621  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:44.582637  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:44.720924  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:44.721123  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:44.747231  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:45.082476  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:45.221961  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:45.222153  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:45.247511  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:45.580317  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:45.720245  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:45.720884  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:45.746822  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:46.080441  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:46.221173  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:46.222560  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:46.246154  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:46.583159  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:46.719929  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:46.720907  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:46.746921  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:47.048004  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:47.080925  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:47.220960  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:47.221160  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:47.246883  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:47.581251  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:47.719974  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:47.720965  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:47.721250  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:47.747325  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:48.081500  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:48.222346  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:48.223213  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:48.246606  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:48.522974  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:48.523002  287073 retry.go:31] will retry after 6.073924032s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:48.579934  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:48.721002  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:48.721214  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:48.747148  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:49.080404  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:49.220155  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:49.221959  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:49.247129  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:49.547217  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:49.580150  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:49.719884  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:49.721347  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:49.746880  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:50.080620  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:50.221074  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:50.221220  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:50.246005  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:50.580113  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:50.721077  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:50.721197  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:50.746975  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:51.080039  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:51.220096  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:51.221051  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:51.246788  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:51.548073  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:51.580269  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:51.721399  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:51.721953  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:51.746761  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:52.081141  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:52.219874  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:52.220986  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:52.247271  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:52.580784  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:52.720754  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:52.720949  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:52.746832  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:53.081444  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:53.221608  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:53.221870  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:53.246879  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:53.550000  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:53.580079  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:53.720019  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:53.720881  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:53.746901  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:54.080916  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:54.221504  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:54.221932  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:54.246900  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:54.586910  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:54.597301  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:54.722964  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:54.723098  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:54.747425  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:55.081888  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:55.222025  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:55.222400  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:55.246250  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:55.409982  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:55.410028  287073 retry.go:31] will retry after 15.275743812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:55.579852  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:55.722488  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:55.722926  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:55.750640  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:56.086041  287073 node_ready.go:49] node "addons-419518" is "Ready"
	I1009 18:29:56.086073  287073 node_ready.go:38] duration metric: took 39.541730878s for node "addons-419518" to be "Ready" ...
	I1009 18:29:56.086088  287073 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:29:56.086168  287073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:56.133821  287073 api_server.go:72] duration metric: took 41.438694191s to wait for apiserver process to appear ...
	I1009 18:29:56.133851  287073 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:29:56.133872  287073 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 18:29:56.171794  287073 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
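The healthz probe above queried the apiserver directly at the address recorded in the log and received HTTP 200 with the body "ok". The same check can be reproduced from the test host; this is a sketch assuming the default minikube setup, where /healthz is readable without credentials:

    curl -sk https://192.168.49.2:8443/healthz
    # a healthy apiserver answers with the literal body: ok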
	I1009 18:29:56.177918  287073 api_server.go:141] control plane version: v1.34.1
	I1009 18:29:56.177965  287073 api_server.go:131] duration metric: took 44.091438ms to wait for apiserver health ...
	I1009 18:29:56.177998  287073 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:29:56.178406  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:56.196123  287073 system_pods.go:59] 19 kube-system pods found
	I1009 18:29:56.196174  287073 system_pods.go:61] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:56.196182  287073 system_pods.go:61] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending
	I1009 18:29:56.196231  287073 system_pods.go:61] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending
	I1009 18:29:56.196240  287073 system_pods.go:61] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending
	I1009 18:29:56.196246  287073 system_pods.go:61] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:56.196250  287073 system_pods.go:61] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:56.196255  287073 system_pods.go:61] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:56.196260  287073 system_pods.go:61] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:56.196300  287073 system_pods.go:61] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending
	I1009 18:29:56.196310  287073 system_pods.go:61] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:56.196315  287073 system_pods.go:61] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:56.196322  287073 system_pods.go:61] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:56.196331  287073 system_pods.go:61] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending
	I1009 18:29:56.196339  287073 system_pods.go:61] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:56.196346  287073 system_pods.go:61] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:56.196383  287073 system_pods.go:61] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending
	I1009 18:29:56.196400  287073 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending
	I1009 18:29:56.196408  287073 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending
	I1009 18:29:56.196419  287073 system_pods.go:61] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending
	I1009 18:29:56.196443  287073 system_pods.go:74] duration metric: took 18.436522ms to wait for pod list to return data ...
	I1009 18:29:56.196457  287073 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:29:56.204863  287073 default_sa.go:45] found service account: "default"
	I1009 18:29:56.204900  287073 default_sa.go:55] duration metric: took 8.434849ms for default service account to be created ...
	I1009 18:29:56.204910  287073 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:29:56.244331  287073 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:29:56.244357  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:56.244573  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:56.245160  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:56.245193  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:56.245199  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending
	I1009 18:29:56.245205  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending
	I1009 18:29:56.245209  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending
	I1009 18:29:56.245213  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:56.245218  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:56.245225  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:56.245229  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:56.245249  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending
	I1009 18:29:56.245259  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:56.245264  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:56.245272  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:56.245281  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending
	I1009 18:29:56.245289  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:56.245295  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:56.245303  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending
	I1009 18:29:56.245320  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.245330  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending
	I1009 18:29:56.245335  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending
	I1009 18:29:56.245351  287073 retry.go:31] will retry after 222.920831ms: missing components: kube-dns
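The "missing components: kube-dns" retries mean the CoreDNS pod (coredns-66bc5c9577-ts42b above) had not yet reported Running; once it does, a few lines further down, the retries stop. In kubeadm-based clusters these pods carry the legacy label k8s-app=kube-dns, so a manual equivalent of this wait is (sketch, selecting by label rather than pod name):

    kubectl -n kube-system get pods -l k8s-app=kube-dns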
	I1009 18:29:56.253008  287073 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:29:56.253032  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:56.501416  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:56.501467  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:56.501485  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:29:56.501491  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending
	I1009 18:29:56.501501  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending
	I1009 18:29:56.501505  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:56.501510  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:56.501527  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:56.501532  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:56.501549  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:29:56.501554  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:56.501565  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:56.501572  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:56.501576  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending
	I1009 18:29:56.501582  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:56.501591  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:56.501608  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending
	I1009 18:29:56.501616  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.501628  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.501634  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 18:29:56.501662  287073 retry.go:31] will retry after 268.332441ms: missing components: kube-dns
	I1009 18:29:56.598111  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:56.732338  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:56.732521  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:56.747113  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:56.776455  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:56.776503  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:56.776513  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:29:56.776521  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:29:56.776527  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:29:56.776532  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:56.776537  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:56.776541  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:56.776558  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:56.776571  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:29:56.776575  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:56.776580  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:56.776592  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:56.776600  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:29:56.776609  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:56.776619  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:56.776633  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:29:56.776642  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.776652  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.776662  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 18:29:56.776678  287073 retry.go:31] will retry after 427.584806ms: missing components: kube-dns
	I1009 18:29:57.080548  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:57.210875  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:57.210915  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:57.210934  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:29:57.210942  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:29:57.210949  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:29:57.210955  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:57.210961  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:57.210967  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:57.210971  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:57.210978  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:29:57.210986  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:57.210990  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:57.211006  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:57.211024  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:29:57.211031  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:57.211037  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:57.211046  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:29:57.211053  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:57.211062  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:57.211068  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 18:29:57.211089  287073 retry.go:31] will retry after 572.28595ms: missing components: kube-dns
	I1009 18:29:57.222182  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:57.222258  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:57.247774  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:57.580131  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:57.722254  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:57.722568  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:57.746777  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:57.788911  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:57.788958  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Running
	I1009 18:29:57.788969  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:29:57.788978  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:29:57.788993  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:29:57.789003  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:57.789008  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:57.789012  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:57.789029  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:57.789036  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:29:57.789042  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:57.789048  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:57.789057  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:57.789064  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:29:57.789071  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:57.789079  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:57.789088  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:29:57.789105  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:57.789116  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:57.789120  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Running
	I1009 18:29:57.789133  287073 system_pods.go:126] duration metric: took 1.584216594s to wait for k8s-apps to be running ...
	I1009 18:29:57.789140  287073 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:29:57.789209  287073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:29:57.803298  287073 system_svc.go:56] duration metric: took 14.148271ms WaitForService to wait for kubelet
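The kubelet check above relies on systemctl's exit status: "systemctl is-active --quiet" prints nothing and exits 0 only when the unit is active, which is why the wait completes in a few milliseconds. A manual equivalent on the node (sketch):

    sudo systemctl is-active --quiet kubelet && echo kubelet is active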
	I1009 18:29:57.803335  287073 kubeadm.go:586] duration metric: took 43.108213029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:29:57.803363  287073 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:29:57.806749  287073 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 18:29:57.806779  287073 node_conditions.go:123] node cpu capacity is 2
	I1009 18:29:57.806794  287073 node_conditions.go:105] duration metric: took 3.42493ms to run NodePressure ...
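The NodePressure verification only reads capacity figures already published in the node's status (203034800Ki of ephemeral storage and 2 CPUs here). The same figures can be read back directly; a sketch using the node name from this log:

    kubectl get node addons-419518 -o jsonpath='{.status.capacity}'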
	I1009 18:29:57.806805  287073 start.go:241] waiting for startup goroutines ...
	I1009 18:29:58.081547  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:58.223087  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:58.223254  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:58.247991  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:58.591603  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:58.724192  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:58.724673  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:58.750955  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:59.081024  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:59.225492  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:59.226095  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:59.249556  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:59.589061  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:59.722265  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:59.722401  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:59.746850  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:00.080981  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:00.247203  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:00.247331  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:00.257201  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:00.594648  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:00.725319  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:00.725546  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:00.748853  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:01.082683  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:01.225106  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:01.228160  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:01.253591  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:01.593698  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:01.726897  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:01.727371  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:01.756555  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:02.082311  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:02.227983  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:02.228731  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:02.249614  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:02.579727  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:02.721535  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:02.721697  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:02.749545  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:03.079732  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:03.222923  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:03.224296  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:03.247343  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:03.590345  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:03.726429  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:03.726651  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:03.747746  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:04.080539  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:04.222943  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:04.223329  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:04.246946  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:04.580776  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:04.721483  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:04.722349  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:04.746812  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:05.080947  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:05.222263  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:05.222541  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:05.246974  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:05.580016  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:05.721698  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:05.722020  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:05.747521  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:06.080731  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:06.222345  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:06.222739  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:06.247228  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:06.580688  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:06.722181  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:06.722528  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:06.747108  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:07.081263  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:07.225050  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:07.225765  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:07.246747  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:07.580494  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:07.721148  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:07.721252  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:07.746930  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:08.080290  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:08.223016  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:08.223175  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:08.248121  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:08.580376  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:08.721569  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:08.721919  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:08.747839  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:09.080950  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:09.221845  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:09.222078  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:09.247653  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:09.582905  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:09.722449  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:09.722884  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:09.747687  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:10.081088  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:10.221221  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:10.221440  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:10.247248  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:10.580622  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:10.686857  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:30:10.724150  287073 kapi.go:107] duration metric: took 50.007124624s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:30:10.724570  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:10.748456  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:11.081428  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:11.221966  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:11.247111  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:11.580666  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:11.722281  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:11.749588  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:11.752509  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.065605336s)
	W1009 18:30:11.752544  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:30:11.752562  287073 retry.go:31] will retry after 24.244993909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:30:12.080809  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:12.221734  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:12.247346  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:12.581186  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:12.722184  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:12.748176  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:13.082202  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:13.221596  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:13.247210  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:13.588453  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:13.722017  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:13.747064  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:14.080756  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:14.221687  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:14.247521  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:14.579803  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:14.721249  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:14.746336  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:15.081332  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:15.222308  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:15.247375  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:15.586909  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:15.721142  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:15.749054  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:16.081371  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:16.221478  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:16.247239  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:16.579975  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:16.720898  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:16.746900  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:17.083216  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:17.222226  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:17.247959  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:17.581679  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:17.720845  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:17.747055  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:18.081679  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:18.222618  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:18.248080  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:18.612134  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:18.722006  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:18.748344  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:19.082377  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:19.223078  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:19.248823  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:19.579846  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:19.721415  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:19.746635  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:20.082108  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:20.221306  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:20.246993  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:20.580964  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:20.725782  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:20.747124  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:21.082172  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:21.221879  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:21.247398  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:21.580398  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:21.721635  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:21.747185  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:22.081841  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:22.221003  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:22.247714  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:22.579745  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:22.721127  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:22.747778  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:23.082115  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:23.221512  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:23.247230  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:23.580432  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:23.721923  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:23.747558  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:24.080924  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:24.221389  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:24.247389  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:24.579772  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:24.721574  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:24.746988  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:25.081349  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:25.221987  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:25.248928  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:25.579831  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:25.729833  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:25.755177  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:26.080741  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:26.221804  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:26.247507  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:26.580749  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:26.722254  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:26.747707  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:27.081363  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:27.221801  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:27.248055  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:27.580535  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:27.722163  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:27.758648  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:28.080642  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:28.222228  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:28.246218  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:28.580037  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:28.721390  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:28.758077  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:29.081696  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:29.223628  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:29.249500  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:29.581400  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:29.723214  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:29.748963  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:30.092779  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:30.222586  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:30.248754  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:30.579768  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:30.722078  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:30.747100  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:31.080888  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:31.221750  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:31.247740  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:31.581184  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:31.721298  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:31.747428  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:32.079904  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:32.221193  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:32.247385  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:32.581019  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:32.721229  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:32.748042  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:33.080980  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:33.221576  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:33.247345  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:33.586389  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:33.722761  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:33.747712  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:34.079955  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:34.221662  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:34.248512  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:34.580530  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:34.721873  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:34.748104  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:35.082020  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:35.221209  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:35.247138  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:35.580129  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:35.721846  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:35.746970  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:35.997917  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:30:36.080874  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:36.221001  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:36.247626  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:36.581032  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:36.721413  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:36.747377  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:37.080330  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:37.164877  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.166856527s)
	W1009 18:30:37.164968  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:30:37.165002  287073 retry.go:31] will retry after 32.347620857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:30:37.221207  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:37.248115  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:37.580410  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:37.722187  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:37.748627  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:38.080697  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:38.222904  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:38.248304  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:38.588344  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:38.722596  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:38.747576  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:39.080742  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:39.221467  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:39.247506  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:39.580223  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:39.721579  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:39.747903  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:40.082149  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:40.222361  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:40.249990  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:40.590991  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:40.722493  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:40.749556  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:41.080579  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:41.222055  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:41.248105  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:41.580578  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:41.722278  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:41.747383  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:42.081533  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:42.224140  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:42.324545  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:42.580644  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:42.721853  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:42.747554  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:43.080793  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:43.221206  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:43.247798  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:43.580183  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:43.721617  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:43.746841  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:44.080768  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:44.221108  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:44.248978  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:44.580109  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:44.723875  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:44.753774  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:45.085590  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:45.221979  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:45.248585  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:45.582683  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:45.720898  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:45.747040  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:46.081136  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:46.220939  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:46.248122  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:46.580506  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:46.722472  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:46.747212  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:47.080362  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:47.222078  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:47.247470  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:47.583395  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:47.723405  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:47.746313  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:48.080495  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:48.222052  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:48.247468  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:48.579499  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:48.721649  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:48.756884  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:49.081130  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:49.222445  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:49.246305  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:49.580425  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:49.721214  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:49.747616  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:50.080150  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:50.221202  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:50.247355  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:50.579517  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:50.722120  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:50.747125  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:51.080867  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:51.221234  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:51.247178  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:51.588627  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:51.722641  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:51.747191  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:52.080784  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:52.228052  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:52.250642  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:52.580883  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:52.722092  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:52.747405  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:53.082800  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:53.228696  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:53.246635  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:53.579735  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:53.722082  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:53.747481  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:54.080937  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:54.221967  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:54.247352  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:54.589166  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:54.721954  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:54.747734  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:55.094282  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:55.223338  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:55.324309  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:55.587606  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:55.722306  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:55.749251  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:56.080481  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:56.222931  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:56.247862  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:56.583830  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:56.721098  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:56.747767  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:57.080336  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:57.222041  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:57.247770  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:57.579408  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:57.721788  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:57.747034  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:58.080345  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:58.222318  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:58.246612  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:58.580290  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:58.721595  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:58.746601  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:59.081446  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:59.221919  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:59.247674  287073 kapi.go:107] duration metric: took 1m38.004370654s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:30:59.586059  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:59.721523  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:00.095427  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:00.227344  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:00.586437  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:00.722177  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:01.081164  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:01.221885  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:01.580837  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:01.721175  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:02.080208  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:02.221238  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:02.579564  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:02.722637  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:03.079965  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:03.221621  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:03.580165  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:03.721185  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:04.080738  287073 kapi.go:107] duration metric: took 1m39.504163254s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:31:04.084577  287073 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-419518 cluster.
	I1009 18:31:04.088459  287073 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:31:04.092294  287073 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:31:04.223823  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:04.721708  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:05.222426  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:05.721628  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:06.221985  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:06.721887  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:07.224289  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:07.722163  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:08.222889  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:08.721612  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:09.224157  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:09.513353  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:31:09.721285  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:10.221824  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:10.697801  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.184355167s)
	W1009 18:31:10.697901  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:31:10.698030  287073 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:31:10.722318  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:11.222422  287073 kapi.go:107] duration metric: took 1m50.504528082s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:31:11.225365  287073 out.go:179] * Enabled addons: amd-gpu-device-plugin, default-storageclass, ingress-dns, registry-creds, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1009 18:31:11.228241  287073 addons.go:514] duration metric: took 1m56.53171651s for enable addons: enabled=[amd-gpu-device-plugin default-storageclass ingress-dns registry-creds storage-provisioner nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1009 18:31:11.228298  287073 start.go:246] waiting for cluster config update ...
	I1009 18:31:11.228325  287073 start.go:255] writing updated cluster config ...
	I1009 18:31:11.228659  287073 ssh_runner.go:195] Run: rm -f paused
	I1009 18:31:11.232609  287073 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:31:11.236104  287073 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ts42b" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.243006  287073 pod_ready.go:94] pod "coredns-66bc5c9577-ts42b" is "Ready"
	I1009 18:31:11.243036  287073 pod_ready.go:86] duration metric: took 6.9008ms for pod "coredns-66bc5c9577-ts42b" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.245619  287073 pod_ready.go:83] waiting for pod "etcd-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.250868  287073 pod_ready.go:94] pod "etcd-addons-419518" is "Ready"
	I1009 18:31:11.250946  287073 pod_ready.go:86] duration metric: took 5.299004ms for pod "etcd-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.253249  287073 pod_ready.go:83] waiting for pod "kube-apiserver-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.258056  287073 pod_ready.go:94] pod "kube-apiserver-addons-419518" is "Ready"
	I1009 18:31:11.258170  287073 pod_ready.go:86] duration metric: took 4.850895ms for pod "kube-apiserver-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.323035  287073 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.636646  287073 pod_ready.go:94] pod "kube-controller-manager-addons-419518" is "Ready"
	I1009 18:31:11.636681  287073 pod_ready.go:86] duration metric: took 313.617028ms for pod "kube-controller-manager-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.837557  287073 pod_ready.go:83] waiting for pod "kube-proxy-lrwp7" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:12.237807  287073 pod_ready.go:94] pod "kube-proxy-lrwp7" is "Ready"
	I1009 18:31:12.237839  287073 pod_ready.go:86] duration metric: took 400.253129ms for pod "kube-proxy-lrwp7" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:12.437038  287073 pod_ready.go:83] waiting for pod "kube-scheduler-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:12.836452  287073 pod_ready.go:94] pod "kube-scheduler-addons-419518" is "Ready"
	I1009 18:31:12.836480  287073 pod_ready.go:86] duration metric: took 399.366818ms for pod "kube-scheduler-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:12.836493  287073 pod_ready.go:40] duration metric: took 1.603846792s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:31:12.892140  287073 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 18:31:12.895196  287073 out.go:179] * Done! kubectl is now configured to use "addons-419518" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 18:34:09 addons-419518 crio[829]: time="2025-10-09T18:34:09.005590328Z" level=info msg="Removed container 6fac5209501fd4a1bb0d7328835f083003f2cada64c11699217f3f8737be0406: kube-system/registry-creds-764b6fb674-d8wvd/registry-creds" id=212be0b9-d969-4df7-8544-d4dbd8b5bebb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.48109403Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-t7zkv/POD" id=4d01a93b-0cb4-486d-a362-0862646248f8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.481175556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.492292829Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-t7zkv Namespace:default ID:f7e6d9d8bcb4989904503cbae90fb5c3fdeaac48fbd3f2d19bd3a7332055e356 UID:f5d7e6e6-7e31-44bc-9b67-efa211ec52ef NetNS:/var/run/netns/b2afe812-862e-4cf1-98be-f3ac902f8910 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000799b0}] Aliases:map[]}"
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.492350076Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-t7zkv to CNI network \"kindnet\" (type=ptp)"
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.508663574Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-t7zkv Namespace:default ID:f7e6d9d8bcb4989904503cbae90fb5c3fdeaac48fbd3f2d19bd3a7332055e356 UID:f5d7e6e6-7e31-44bc-9b67-efa211ec52ef NetNS:/var/run/netns/b2afe812-862e-4cf1-98be-f3ac902f8910 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000799b0}] Aliases:map[]}"
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.508825216Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-t7zkv for CNI network kindnet (type=ptp)"
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.513380238Z" level=info msg="Ran pod sandbox f7e6d9d8bcb4989904503cbae90fb5c3fdeaac48fbd3f2d19bd3a7332055e356 with infra container: default/hello-world-app-5d498dc89-t7zkv/POD" id=4d01a93b-0cb4-486d-a362-0862646248f8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.515828536Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=43804174-2c5b-4150-b996-10b9556b79b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.516076784Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=43804174-2c5b-4150-b996-10b9556b79b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.516245409Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=43804174-2c5b-4150-b996-10b9556b79b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.517551036Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0130431e-ef6d-4307-a6c4-cfa765b95676 name=/runtime.v1.ImageService/PullImage
	Oct 09 18:34:12 addons-419518 crio[829]: time="2025-10-09T18:34:12.519030686Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.256883107Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=0130431e-ef6d-4307-a6c4-cfa765b95676 name=/runtime.v1.ImageService/PullImage
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.257577233Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3d85fcf0-f91b-4282-bc14-2a844905d940 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.261974182Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=02b4fd84-0945-40ef-a195-8aa5e4bf77a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.270412369Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-t7zkv/hello-world-app" id=08750abc-707b-4eea-b123-d7e908e1e617 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.271353349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.280325292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.280531923Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7293096eda486f5676b6777945b74813f6b84058363b17a4843f5344250280a0/merged/etc/passwd: no such file or directory"
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.280600264Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7293096eda486f5676b6777945b74813f6b84058363b17a4843f5344250280a0/merged/etc/group: no such file or directory"
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.280876681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.303405513Z" level=info msg="Created container e0b1b702e4ea0409b25ac69076644a78cba7ae0dece7c21e35f2f6b1ca83084e: default/hello-world-app-5d498dc89-t7zkv/hello-world-app" id=08750abc-707b-4eea-b123-d7e908e1e617 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.306900662Z" level=info msg="Starting container: e0b1b702e4ea0409b25ac69076644a78cba7ae0dece7c21e35f2f6b1ca83084e" id=28324d22-3e94-4974-8893-9c6116419dc1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 18:34:13 addons-419518 crio[829]: time="2025-10-09T18:34:13.311112248Z" level=info msg="Started container" PID=7215 containerID=e0b1b702e4ea0409b25ac69076644a78cba7ae0dece7c21e35f2f6b1ca83084e description=default/hello-world-app-5d498dc89-t7zkv/hello-world-app id=28324d22-3e94-4974-8893-9c6116419dc1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7e6d9d8bcb4989904503cbae90fb5c3fdeaac48fbd3f2d19bd3a7332055e356
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	e0b1b702e4ea0       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   f7e6d9d8bcb49       hello-world-app-5d498dc89-t7zkv            default
	de8d4de4f53d3       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             5 seconds ago            Exited              registry-creds                           1                   554e6d65f2c95       registry-creds-764b6fb674-d8wvd            kube-system
	79b9e2bd3b388       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   27d823104c124       nginx                                      default
	1c7d7c222a928       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   8fb427be72f09       busybox                                    default
	b221e7724dee9       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             3 minutes ago            Running             controller                               0                   6625833a2a108       ingress-nginx-controller-9cc49f96f-vm584   ingress-nginx
	acfe289f616bb       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                                             3 minutes ago            Exited              patch                                    3                   754f8a2ebbf5d       ingress-nginx-admission-patch-rv5vh        ingress-nginx
	3dfa8f6e3d04c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   9811954f45084       gcp-auth-78565c9fb4-8tvnl                  gcp-auth
	e0fa427a81a17       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	5fe03c686f8b2       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	d925680a7d245       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	ca2e6448f0dce       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	caab66aa10413       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	79ba2ae1cb348       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   cb36360f31efb       gadget-xxzz7                               gadget
	f3b3020865ddf       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   62feeecd9740e       local-path-provisioner-648f6765c9-jjjkb    local-path-storage
	46ac337e6073b       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   ca59c8f9749c8       snapshot-controller-7d9fbc56b8-gdcjj       kube-system
	8c0f1f4eee998       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   8364d021775bc       snapshot-controller-7d9fbc56b8-fjjl6       kube-system
	b7dc868dfc33f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	0a59f751353f9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   3 minutes ago            Exited              create                                   0                   c19aa1c710a4b       ingress-nginx-admission-create-bmpfw       ingress-nginx
	0d83359f6789f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   73122c095849b       csi-hostpath-attacher-0                    kube-system
	c8fcd3e8370a3       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   3e0055f94ae6f       kube-ingress-dns-minikube                  kube-system
	903817dace553       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   95b7fc628115c       nvidia-device-plugin-daemonset-qtz2j       kube-system
	52c50a08ae9a2       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   b725cddf4a0fc       cloud-spanner-emulator-86bd5cbb97-2zfvh    default
	4ffab12f1d2fe       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              4 minutes ago            Running             registry-proxy                           0                   09b4de9e89fae       registry-proxy-4qmrl                       kube-system
	ff3eb8e48ecc2       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   cd04638ce2edb       yakd-dashboard-5ff678cb9-hjpsv             yakd-dashboard
	88ad8c9fc37d9       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   a8999c36d49ae       csi-hostpath-resizer-0                     kube-system
	013cbdf8660e8       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           4 minutes ago            Running             registry                                 0                   1c91f319ad4b3       registry-66898fdd98-vd6nz                  kube-system
	612bd221adece       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   e48ffaa6cfb7c       metrics-server-85b7d694d7-qbwpc            kube-system
	57430c58fdb35       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   96cd852ddebea       coredns-66bc5c9577-ts42b                   kube-system
	8511bdac64fcf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   5fc9ec431ae78       storage-provisioner                        kube-system
	0fdd586f76a51       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   af7d00d05d8fc       kindnet-kvxfh                              kube-system
	d6540d20da4ee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   8d34a1ddf8642       kube-proxy-lrwp7                           kube-system
	7fe0435ff5aac       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   e03c2eab84bcd       kube-scheduler-addons-419518               kube-system
	a04d990d6cdb2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   cd2f2771fd24c       etcd-addons-419518                         kube-system
	3fa30ba6794d2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   ffb3f563e0172       kube-apiserver-addons-419518               kube-system
	fea680bc13a62       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   2067cb7d0ec25       kube-controller-manager-addons-419518      kube-system
	
	
	==> coredns [57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9] <==
	[INFO] 10.244.0.4:34379 - 18840 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002378536s
	[INFO] 10.244.0.4:34379 - 61047 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000295296s
	[INFO] 10.244.0.4:34379 - 8853 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.0001163s
	[INFO] 10.244.0.4:48062 - 54173 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000193601s
	[INFO] 10.244.0.4:48062 - 54403 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000277547s
	[INFO] 10.244.0.4:36761 - 29102 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110573s
	[INFO] 10.244.0.4:36761 - 28881 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088714s
	[INFO] 10.244.0.4:41135 - 42835 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097666s
	[INFO] 10.244.0.4:41135 - 42614 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067258s
	[INFO] 10.244.0.4:45410 - 62091 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001721638s
	[INFO] 10.244.0.4:45410 - 61894 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001744227s
	[INFO] 10.244.0.4:46810 - 30759 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00018332s
	[INFO] 10.244.0.4:46810 - 30578 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096517s
	[INFO] 10.244.0.20:39207 - 22968 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000310557s
	[INFO] 10.244.0.20:48270 - 64397 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000198081s
	[INFO] 10.244.0.20:52084 - 21293 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149547s
	[INFO] 10.244.0.20:37321 - 41772 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000299743s
	[INFO] 10.244.0.20:49323 - 34038 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015534s
	[INFO] 10.244.0.20:54315 - 64541 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000162462s
	[INFO] 10.244.0.20:40486 - 26803 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002195703s
	[INFO] 10.244.0.20:49244 - 40289 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001827611s
	[INFO] 10.244.0.20:49308 - 27729 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003172331s
	[INFO] 10.244.0.20:58620 - 50870 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003394904s
	[INFO] 10.244.0.23:44299 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000229769s
	[INFO] 10.244.0.23:33439 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00020321s
	
	
	==> describe nodes <==
	Name:               addons-419518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-419518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=addons-419518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T18_29_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-419518
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-419518"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 18:29:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-419518
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 18:34:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 18:33:45 +0000   Thu, 09 Oct 2025 18:29:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 18:33:45 +0000   Thu, 09 Oct 2025 18:29:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 18:33:45 +0000   Thu, 09 Oct 2025 18:29:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 18:33:45 +0000   Thu, 09 Oct 2025 18:29:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-419518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0aaba348a6d44fe9392eb659a3419c0
	  System UUID:                84f031e7-c237-48a4-afe2-ac0fc5df6eb2
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     cloud-spanner-emulator-86bd5cbb97-2zfvh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  default                     hello-world-app-5d498dc89-t7zkv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-xxzz7                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  gcp-auth                    gcp-auth-78565c9fb4-8tvnl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-vm584    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m54s
	  kube-system                 coredns-66bc5c9577-ts42b                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 csi-hostpathplugin-p2zpw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 etcd-addons-419518                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m5s
	  kube-system                 kindnet-kvxfh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m
	  kube-system                 kube-apiserver-addons-419518                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-addons-419518       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-lrwp7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-addons-419518                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 metrics-server-85b7d694d7-qbwpc             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m55s
	  kube-system                 nvidia-device-plugin-daemonset-qtz2j        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 registry-66898fdd98-vd6nz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 registry-creds-764b6fb674-d8wvd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 registry-proxy-4qmrl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 snapshot-controller-7d9fbc56b8-fjjl6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 snapshot-controller-7d9fbc56b8-gdcjj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  local-path-storage          local-path-provisioner-648f6765c9-jjjkb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-hjpsv              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m58s                  kube-proxy       
	  Warning  CgroupV1                 5m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node addons-419518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node addons-419518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m13s (x8 over 5m13s)  kubelet          Node addons-419518 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m6s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m5s                   kubelet          Node addons-419518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m5s                   kubelet          Node addons-419518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m5s                   kubelet          Node addons-419518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m1s                   node-controller  Node addons-419518 event: Registered Node addons-419518 in Controller
	  Normal   NodeReady                4m19s                  kubelet          Node addons-419518 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014502] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.555614] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757222] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.781088] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 14209023 ns
	[Oct 9 18:26] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 18:29] overlayfs: idmapped layers are currently not supported
	[  +0.074293] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d] <==
	{"level":"warn","ts":"2025-10-09T18:29:04.960829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:04.995746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.020008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.053812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.077820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.109618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.139672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.162768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.189982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.218357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.243739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.304637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.319368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.341944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.367339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.390986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.406320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.423061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.518248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:21.569884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:21.591768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:43.417450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:43.431728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:43.479759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:43.494561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38804","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [3dfa8f6e3d04c4610e97aec90197e6d5807c4a90d87755a7c02194eda4f6660a] <==
	2025/10/09 18:31:03 GCP Auth Webhook started!
	2025/10/09 18:31:13 Ready to marshal response ...
	2025/10/09 18:31:13 Ready to write response ...
	2025/10/09 18:31:13 Ready to marshal response ...
	2025/10/09 18:31:13 Ready to write response ...
	2025/10/09 18:31:14 Ready to marshal response ...
	2025/10/09 18:31:14 Ready to write response ...
	2025/10/09 18:31:34 Ready to marshal response ...
	2025/10/09 18:31:34 Ready to write response ...
	2025/10/09 18:31:40 Ready to marshal response ...
	2025/10/09 18:31:40 Ready to write response ...
	2025/10/09 18:31:50 Ready to marshal response ...
	2025/10/09 18:31:50 Ready to write response ...
	2025/10/09 18:32:00 Ready to marshal response ...
	2025/10/09 18:32:00 Ready to write response ...
	2025/10/09 18:32:10 Ready to marshal response ...
	2025/10/09 18:32:10 Ready to write response ...
	2025/10/09 18:32:10 Ready to marshal response ...
	2025/10/09 18:32:10 Ready to write response ...
	2025/10/09 18:32:18 Ready to marshal response ...
	2025/10/09 18:32:18 Ready to write response ...
	2025/10/09 18:34:12 Ready to marshal response ...
	2025/10/09 18:34:12 Ready to write response ...
	
	
	==> kernel <==
	 18:34:14 up  1:16,  0 user,  load average: 0.48, 1.64, 2.70
	Linux addons-419518 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409] <==
	I1009 18:32:05.154513       1 main.go:301] handling current node
	I1009 18:32:15.154918       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:32:15.155038       1 main.go:301] handling current node
	I1009 18:32:25.155334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:32:25.155375       1 main.go:301] handling current node
	I1009 18:32:35.155939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:32:35.155979       1 main.go:301] handling current node
	I1009 18:32:45.154526       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:32:45.154562       1 main.go:301] handling current node
	I1009 18:32:55.154573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:32:55.154609       1 main.go:301] handling current node
	I1009 18:33:05.155940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:33:05.155976       1 main.go:301] handling current node
	I1009 18:33:15.154488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:33:15.154526       1 main.go:301] handling current node
	I1009 18:33:25.155405       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:33:25.155547       1 main.go:301] handling current node
	I1009 18:33:35.154413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:33:35.154451       1 main.go:301] handling current node
	I1009 18:33:45.154607       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:33:45.154660       1 main.go:301] handling current node
	I1009 18:33:55.154677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:33:55.154712       1 main.go:301] handling current node
	I1009 18:34:05.155933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:34:05.155973       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade] <==
	I1009 18:29:24.399980       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.235.159"}
	W1009 18:29:43.417411       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 18:29:43.431756       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 18:29:43.478936       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 18:29:43.493698       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 18:29:55.747005       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.235.159:443: connect: connection refused
	E1009 18:29:55.747134       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.235.159:443: connect: connection refused" logger="UnhandledError"
	W1009 18:29:55.747538       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.235.159:443: connect: connection refused
	E1009 18:29:55.747606       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.235.159:443: connect: connection refused" logger="UnhandledError"
	W1009 18:29:55.824524       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.235.159:443: connect: connection refused
	E1009 18:29:55.825371       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.235.159:443: connect: connection refused" logger="UnhandledError"
	W1009 18:30:11.360177       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 18:30:11.360252       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1009 18:30:11.360590       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.152.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.152.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.152.104:443: connect: connection refused" logger="UnhandledError"
	E1009 18:30:11.364174       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.152.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.152.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.152.104:443: connect: connection refused" logger="UnhandledError"
	I1009 18:30:11.484383       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1009 18:31:22.221251       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57710: use of closed network connection
	E1009 18:31:22.443157       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57750: use of closed network connection
	I1009 18:31:50.633093       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 18:31:50.929742       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.181.5"}
	I1009 18:31:51.614999       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 18:34:12.373287       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.160.3"}
	
	
	==> kube-controller-manager [fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223] <==
	I1009 18:29:13.421787       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 18:29:13.421856       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 18:29:13.421880       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 18:29:13.421940       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 18:29:13.421961       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 18:29:13.431234       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 18:29:13.432293       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-419518" podCIDRs=["10.244.0.0/24"]
	I1009 18:29:13.444554       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 18:29:13.444659       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 18:29:13.444564       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 18:29:13.444754       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-419518"
	I1009 18:29:13.444801       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 18:29:13.447136       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 18:29:13.447144       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 18:29:13.448286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 18:29:13.451128       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	E1009 18:29:19.625982       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1009 18:29:43.410429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 18:29:43.410592       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1009 18:29:43.410644       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1009 18:29:43.455818       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1009 18:29:43.471869       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1009 18:29:43.510807       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 18:29:43.572629       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 18:29:58.454312       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173] <==
	I1009 18:29:15.082486       1 server_linux.go:53] "Using iptables proxy"
	I1009 18:29:15.394040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 18:29:15.497067       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 18:29:15.497101       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 18:29:15.497187       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:29:15.587124       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:29:15.590443       1 server_linux.go:132] "Using iptables Proxier"
	I1009 18:29:15.596070       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:29:15.596427       1 server.go:527] "Version info" version="v1.34.1"
	I1009 18:29:15.596451       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:29:15.597755       1 config.go:200] "Starting service config controller"
	I1009 18:29:15.597778       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 18:29:15.597795       1 config.go:106] "Starting endpoint slice config controller"
	I1009 18:29:15.597799       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 18:29:15.597809       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 18:29:15.597813       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 18:29:15.603858       1 config.go:309] "Starting node config controller"
	I1009 18:29:15.603876       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 18:29:15.603884       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 18:29:15.698089       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 18:29:15.698145       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 18:29:15.698187       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa] <==
	E1009 18:29:06.569750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 18:29:06.569806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 18:29:06.569857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 18:29:06.569902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 18:29:06.569992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 18:29:06.570042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 18:29:06.570084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 18:29:06.571706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 18:29:06.571778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 18:29:06.571850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 18:29:06.571913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 18:29:06.571962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 18:29:06.571999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 18:29:06.572042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 18:29:06.572092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 18:29:06.572128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 18:29:06.573024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 18:29:06.573120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 18:29:07.392210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 18:29:07.421855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 18:29:07.535347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 18:29:07.555047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 18:29:07.617419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 18:29:07.801055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1009 18:29:09.536477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 18:32:20 addons-419518 kubelet[1283]: I1009 18:32:20.324051    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c" (UID: "a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 09 18:32:20 addons-419518 kubelet[1283]: I1009 18:32:20.329275    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c-kube-api-access-wnqfs" (OuterVolumeSpecName: "kube-api-access-wnqfs") pod "a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c" (UID: "a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c"). InnerVolumeSpecName "kube-api-access-wnqfs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 09 18:32:20 addons-419518 kubelet[1283]: I1009 18:32:20.424120    1283 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c-gcp-creds\") on node \"addons-419518\" DevicePath \"\""
	Oct 09 18:32:20 addons-419518 kubelet[1283]: I1009 18:32:20.424165    1283 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c-data\") on node \"addons-419518\" DevicePath \"\""
	Oct 09 18:32:20 addons-419518 kubelet[1283]: I1009 18:32:20.424177    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wnqfs\" (UniqueName: \"kubernetes.io/projected/a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c-kube-api-access-wnqfs\") on node \"addons-419518\" DevicePath \"\""
	Oct 09 18:32:20 addons-419518 kubelet[1283]: I1009 18:32:20.424191    1283 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c-script\") on node \"addons-419518\" DevicePath \"\""
	Oct 09 18:32:20 addons-419518 kubelet[1283]: I1009 18:32:20.949440    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c" path="/var/lib/kubelet/pods/a62a8bdd-dc4b-4619-8d7b-0cdb9e72597c/volumes"
	Oct 09 18:32:21 addons-419518 kubelet[1283]: I1009 18:32:21.224767    1283 scope.go:117] "RemoveContainer" containerID="663ead38e8a3f3b3eab5560bf48150b276b39f78ba488b768ee53f77160731de"
	Oct 09 18:32:31 addons-419518 kubelet[1283]: I1009 18:32:31.946443    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-vd6nz" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:32:44 addons-419518 kubelet[1283]: I1009 18:32:44.946182    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4qmrl" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:33:08 addons-419518 kubelet[1283]: I1009 18:33:08.941956    1283 scope.go:117] "RemoveContainer" containerID="3de8979472b9122c0f16ef668fea113211ef0d6d7a4db061ba8052a148f4f1e1"
	Oct 09 18:33:08 addons-419518 kubelet[1283]: I1009 18:33:08.951954    1283 scope.go:117] "RemoveContainer" containerID="18be03b0d42b2e3420f9e0c310192fb59c02f9ba23422eb116c57a974250a777"
	Oct 09 18:33:21 addons-419518 kubelet[1283]: I1009 18:33:21.946683    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qtz2j" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:33:38 addons-419518 kubelet[1283]: I1009 18:33:38.947750    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-vd6nz" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:34:04 addons-419518 kubelet[1283]: I1009 18:34:04.949196    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4qmrl" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:34:05 addons-419518 kubelet[1283]: I1009 18:34:05.946644    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d8wvd" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:34:05 addons-419518 kubelet[1283]: W1009 18:34:05.971498    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/crio-554e6d65f2c951abfae1b3a939a5e5309d75e04ac07bc2a4bff8a21bce7c9522 WatchSource:0}: Error finding container 554e6d65f2c951abfae1b3a939a5e5309d75e04ac07bc2a4bff8a21bce7c9522: Status 404 returned error can't find the container with id 554e6d65f2c951abfae1b3a939a5e5309d75e04ac07bc2a4bff8a21bce7c9522
	Oct 09 18:34:08 addons-419518 kubelet[1283]: I1009 18:34:08.617019    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d8wvd" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:34:08 addons-419518 kubelet[1283]: I1009 18:34:08.617085    1283 scope.go:117] "RemoveContainer" containerID="6fac5209501fd4a1bb0d7328835f083003f2cada64c11699217f3f8737be0406"
	Oct 09 18:34:08 addons-419518 kubelet[1283]: I1009 18:34:08.988653    1283 scope.go:117] "RemoveContainer" containerID="6fac5209501fd4a1bb0d7328835f083003f2cada64c11699217f3f8737be0406"
	Oct 09 18:34:09 addons-419518 kubelet[1283]: I1009 18:34:09.622981    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-d8wvd" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:34:09 addons-419518 kubelet[1283]: I1009 18:34:09.623040    1283 scope.go:117] "RemoveContainer" containerID="de8d4de4f53d3ebcce03c1a1d14231aec8e3011fe260ed15d8f0167fde85f2f3"
	Oct 09 18:34:09 addons-419518 kubelet[1283]: E1009 18:34:09.623202    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-d8wvd_kube-system(db9f2892-519c-4f26-9685-f2f98ea45002)\"" pod="kube-system/registry-creds-764b6fb674-d8wvd" podUID="db9f2892-519c-4f26-9685-f2f98ea45002"
	Oct 09 18:34:12 addons-419518 kubelet[1283]: I1009 18:34:12.276224    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rbf\" (UniqueName: \"kubernetes.io/projected/f5d7e6e6-7e31-44bc-9b67-efa211ec52ef-kube-api-access-z7rbf\") pod \"hello-world-app-5d498dc89-t7zkv\" (UID: \"f5d7e6e6-7e31-44bc-9b67-efa211ec52ef\") " pod="default/hello-world-app-5d498dc89-t7zkv"
	Oct 09 18:34:12 addons-419518 kubelet[1283]: I1009 18:34:12.276309    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5d7e6e6-7e31-44bc-9b67-efa211ec52ef-gcp-creds\") pod \"hello-world-app-5d498dc89-t7zkv\" (UID: \"f5d7e6e6-7e31-44bc-9b67-efa211ec52ef\") " pod="default/hello-world-app-5d498dc89-t7zkv"
	
	
	==> storage-provisioner [8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17] <==
	W1009 18:33:50.031763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:33:52.035318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:33:52.040179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:33:54.043507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:33:54.048311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:33:56.051265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:33:56.062177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:33:58.066193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:33:58.071469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:00.139335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:00.190507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:02.193642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:02.198688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:04.201724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:04.207142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:06.211711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:06.216936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:08.219968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:08.224571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:10.228354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:10.235134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:12.264278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:12.304626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:14.310247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:34:14.318309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-419518 -n addons-419518
helpers_test.go:269: (dbg) Run:  kubectl --context addons-419518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bmpfw ingress-nginx-admission-patch-rv5vh
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-419518 describe pod ingress-nginx-admission-create-bmpfw ingress-nginx-admission-patch-rv5vh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-419518 describe pod ingress-nginx-admission-create-bmpfw ingress-nginx-admission-patch-rv5vh: exit status 1 (124.414332ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bmpfw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rv5vh" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-419518 describe pod ingress-nginx-admission-create-bmpfw ingress-nginx-admission-patch-rv5vh: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (288.11579ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:34:15.711455  296739 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:34:15.712333  296739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:34:15.712349  296739 out.go:374] Setting ErrFile to fd 2...
	I1009 18:34:15.712356  296739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:34:15.712615  296739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:34:15.712932  296739 mustload.go:65] Loading cluster: addons-419518
	I1009 18:34:15.713357  296739 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:34:15.713377  296739 addons.go:606] checking whether the cluster is paused
	I1009 18:34:15.713486  296739 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:34:15.713509  296739 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:34:15.714019  296739 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:34:15.737648  296739 ssh_runner.go:195] Run: systemctl --version
	I1009 18:34:15.737717  296739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:34:15.763751  296739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:34:15.869974  296739 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:34:15.870061  296739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:34:15.915248  296739 cri.go:89] found id: "de8d4de4f53d3ebcce03c1a1d14231aec8e3011fe260ed15d8f0167fde85f2f3"
	I1009 18:34:15.915279  296739 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:34:15.915285  296739 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:34:15.915288  296739 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:34:15.915291  296739 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:34:15.915295  296739 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:34:15.915302  296739 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:34:15.915305  296739 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:34:15.915315  296739 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:34:15.915324  296739 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:34:15.915328  296739 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:34:15.915330  296739 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:34:15.915334  296739 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:34:15.915336  296739 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:34:15.915339  296739 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:34:15.915344  296739 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:34:15.915347  296739 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:34:15.915350  296739 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:34:15.915354  296739 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:34:15.915357  296739 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:34:15.915361  296739 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:34:15.915364  296739 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:34:15.915367  296739 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:34:15.915370  296739 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:34:15.915373  296739 cri.go:89] found id: ""
	I1009 18:34:15.915432  296739 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:34:15.930671  296739 out.go:203] 
	W1009 18:34:15.933819  296739 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:34:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:34:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:34:15.933859  296739 out.go:285] * 
	* 
	W1009 18:34:15.940319  296739 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:34:15.943331  296739 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable ingress --alsologtostderr -v=1: exit status 11 (286.12171ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:34:16.008967  296784 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:34:16.009858  296784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:34:16.009936  296784 out.go:374] Setting ErrFile to fd 2...
	I1009 18:34:16.009971  296784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:34:16.010608  296784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:34:16.011206  296784 mustload.go:65] Loading cluster: addons-419518
	I1009 18:34:16.011715  296784 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:34:16.011765  296784 addons.go:606] checking whether the cluster is paused
	I1009 18:34:16.011918  296784 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:34:16.011960  296784 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:34:16.012464  296784 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:34:16.031174  296784 ssh_runner.go:195] Run: systemctl --version
	I1009 18:34:16.031235  296784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:34:16.050089  296784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:34:16.160790  296784 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:34:16.160894  296784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:34:16.200260  296784 cri.go:89] found id: "de8d4de4f53d3ebcce03c1a1d14231aec8e3011fe260ed15d8f0167fde85f2f3"
	I1009 18:34:16.200328  296784 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:34:16.200350  296784 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:34:16.200374  296784 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:34:16.200407  296784 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:34:16.200433  296784 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:34:16.200456  296784 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:34:16.200478  296784 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:34:16.200497  296784 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:34:16.200533  296784 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:34:16.200553  296784 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:34:16.200575  296784 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:34:16.200607  296784 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:34:16.200629  296784 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:34:16.200649  296784 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:34:16.200676  296784 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:34:16.200728  296784 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:34:16.200755  296784 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:34:16.200778  296784 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:34:16.200801  296784 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:34:16.200836  296784 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:34:16.200860  296784 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:34:16.200882  296784 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:34:16.200905  296784 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:34:16.200939  296784 cri.go:89] found id: ""
	I1009 18:34:16.201023  296784 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:34:16.217537  296784 out.go:203] 
	W1009 18:34:16.220367  296784 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:34:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:34:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:34:16.220399  296784 out.go:285] * 
	* 
	W1009 18:34:16.226938  296784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:34:16.229916  296784 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.91s)
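Every `exit status 11` in this group comes from the same check, visible in the stderr captures above: before disabling an addon, minikube verifies the cluster is not paused by listing kube-system containers with crictl over SSH and then running `sudo runc list -f json` on the node; the runc call exits with "open /run/runc: no such file or directory" on this crio node, so the disable is aborted with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing the two commands by hand, assuming the addons-419518 profile is still running (the `minikube ssh` wrapper here is illustrative and not part of the test harness):

	# Same container listing the addon code performs; this part succeeds on the node.
	out/minikube-linux-arm64 -p addons-419518 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The paused-state probe that actually fails in the captures above.
	out/minikube-linux-arm64 -p addons-419518 ssh -- sudo runc list -f json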

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-xxzz7" [93ebe4d3-4184-4136-aaa6-8160866ca032] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004367305s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (265.986303ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:31:50.113842  294302 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:31:50.114771  294302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:50.114824  294302 out.go:374] Setting ErrFile to fd 2...
	I1009 18:31:50.114849  294302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:50.115189  294302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:31:50.115573  294302 mustload.go:65] Loading cluster: addons-419518
	I1009 18:31:50.116043  294302 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:50.116092  294302 addons.go:606] checking whether the cluster is paused
	I1009 18:31:50.116228  294302 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:50.116273  294302 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:31:50.116815  294302 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:31:50.135304  294302 ssh_runner.go:195] Run: systemctl --version
	I1009 18:31:50.135370  294302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:31:50.153092  294302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:31:50.256886  294302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:31:50.257005  294302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:31:50.287462  294302 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:31:50.287492  294302 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:31:50.287498  294302 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:31:50.287502  294302 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:31:50.287506  294302 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:31:50.287518  294302 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:31:50.287522  294302 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:31:50.287526  294302 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:31:50.287529  294302 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:31:50.287537  294302 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:31:50.287545  294302 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:31:50.287549  294302 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:31:50.287555  294302 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:31:50.287558  294302 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:31:50.287562  294302 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:31:50.287572  294302 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:31:50.287580  294302 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:31:50.287585  294302 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:31:50.287589  294302 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:31:50.287592  294302 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:31:50.287598  294302 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:31:50.287601  294302 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:31:50.287604  294302 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:31:50.287607  294302 cri.go:89] found id: ""
	I1009 18:31:50.287667  294302 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:31:50.302726  294302 out.go:203] 
	W1009 18:31:50.305620  294302 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:31:50.305661  294302 out.go:285] * 
	* 
	W1009 18:31:50.312063  294302 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:31:50.314956  294302 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.405056ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00349915s
addons_test.go:463: (dbg) Run:  kubectl --context addons-419518 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (357.329818ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:31:43.772129  294132 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:31:43.772978  294132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:43.773001  294132 out.go:374] Setting ErrFile to fd 2...
	I1009 18:31:43.773007  294132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:43.773308  294132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:31:43.773624  294132 mustload.go:65] Loading cluster: addons-419518
	I1009 18:31:43.774008  294132 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:43.774024  294132 addons.go:606] checking whether the cluster is paused
	I1009 18:31:43.774165  294132 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:43.774189  294132 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:31:43.782836  294132 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:31:43.813934  294132 ssh_runner.go:195] Run: systemctl --version
	I1009 18:31:43.813988  294132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:31:43.843952  294132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:31:43.965348  294132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:31:43.965455  294132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:31:43.998777  294132 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:31:43.998803  294132 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:31:43.998809  294132 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:31:43.998813  294132 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:31:43.998817  294132 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:31:43.998821  294132 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:31:43.998825  294132 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:31:43.998828  294132 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:31:43.998831  294132 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:31:43.998838  294132 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:31:43.998842  294132 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:31:43.998845  294132 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:31:43.998848  294132 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:31:43.998851  294132 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:31:43.998862  294132 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:31:43.998869  294132 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:31:43.998872  294132 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:31:43.998877  294132 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:31:43.998880  294132 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:31:43.998883  294132 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:31:43.998887  294132 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:31:43.998896  294132 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:31:43.998900  294132 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:31:43.998902  294132 cri.go:89] found id: ""
	I1009 18:31:43.998952  294132 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:31:44.024696  294132 out.go:203] 
	W1009 18:31:44.029793  294132 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:31:44.029834  294132 out.go:285] * 
	* 
	W1009 18:31:44.038265  294132 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:31:44.043165  294132 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.49s)

TestAddons/parallel/CSI (43.08s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1009 18:31:26.093749  286309 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1009 18:31:26.098181  286309 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 18:31:26.098209  286309 kapi.go:107] duration metric: took 4.476093ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.485405ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-419518 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/09 18:31:38 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-419518 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [84bae541-f68c-49cb-a84e-94350ade289c] Pending
helpers_test.go:352: "task-pv-pod" [84bae541-f68c-49cb-a84e-94350ade289c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [84bae541-f68c-49cb-a84e-94350ade289c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004873378s
addons_test.go:572: (dbg) Run:  kubectl --context addons-419518 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-419518 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-419518 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-419518 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-419518 delete pod task-pv-pod: (1.366122145s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-419518 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-419518 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-419518 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [2eb3491f-b20b-43a5-9612-4895bea767a6] Pending
helpers_test.go:352: "task-pv-pod-restore" [2eb3491f-b20b-43a5-9612-4895bea767a6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [2eb3491f-b20b-43a5-9612-4895bea767a6] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003954876s
addons_test.go:614: (dbg) Run:  kubectl --context addons-419518 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-419518 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-419518 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (260.620368ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 18:32:08.645855  294982 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:32:08.646589  294982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:08.646606  294982 out.go:374] Setting ErrFile to fd 2...
	I1009 18:32:08.646613  294982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:08.646924  294982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:32:08.647261  294982 mustload.go:65] Loading cluster: addons-419518
	I1009 18:32:08.647711  294982 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:08.647732  294982 addons.go:606] checking whether the cluster is paused
	I1009 18:32:08.647878  294982 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:08.647899  294982 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:32:08.648385  294982 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:32:08.668804  294982 ssh_runner.go:195] Run: systemctl --version
	I1009 18:32:08.668875  294982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:32:08.687438  294982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:32:08.788957  294982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:32:08.789095  294982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:32:08.818327  294982 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:32:08.818351  294982 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:32:08.818356  294982 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:32:08.818360  294982 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:32:08.818363  294982 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:32:08.818366  294982 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:32:08.818369  294982 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:32:08.818372  294982 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:32:08.818375  294982 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:32:08.818381  294982 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:32:08.818385  294982 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:32:08.818388  294982 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:32:08.818391  294982 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:32:08.818394  294982 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:32:08.818398  294982 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:32:08.818403  294982 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:32:08.818409  294982 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:32:08.818413  294982 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:32:08.818416  294982 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:32:08.818419  294982 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:32:08.818424  294982 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:32:08.818427  294982 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:32:08.818430  294982 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:32:08.818433  294982 cri.go:89] found id: ""
	I1009 18:32:08.818481  294982 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:32:08.833662  294982 out.go:203] 
	W1009 18:32:08.836644  294982 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:32:08.836726  294982 out.go:285] * 
	* 
	W1009 18:32:08.844424  294982 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:32:08.847313  294982 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (317.167564ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 18:32:08.940094  295024 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:32:08.941032  295024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:08.941048  295024 out.go:374] Setting ErrFile to fd 2...
	I1009 18:32:08.941054  295024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:08.941356  295024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:32:08.941711  295024 mustload.go:65] Loading cluster: addons-419518
	I1009 18:32:08.942099  295024 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:08.942122  295024 addons.go:606] checking whether the cluster is paused
	I1009 18:32:08.942281  295024 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:08.942301  295024 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:32:08.942811  295024 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:32:08.965781  295024 ssh_runner.go:195] Run: systemctl --version
	I1009 18:32:08.965850  295024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:32:08.984742  295024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:32:09.101100  295024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:32:09.101177  295024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:32:09.134772  295024 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:32:09.134795  295024 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:32:09.134800  295024 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:32:09.134806  295024 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:32:09.134809  295024 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:32:09.134813  295024 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:32:09.134820  295024 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:32:09.134824  295024 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:32:09.134827  295024 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:32:09.134833  295024 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:32:09.134836  295024 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:32:09.134840  295024 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:32:09.134843  295024 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:32:09.134846  295024 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:32:09.134850  295024 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:32:09.134855  295024 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:32:09.134862  295024 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:32:09.134866  295024 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:32:09.134869  295024 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:32:09.134872  295024 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:32:09.134881  295024 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:32:09.134891  295024 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:32:09.134894  295024 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:32:09.134897  295024 cri.go:89] found id: ""
	I1009 18:32:09.134949  295024 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:32:09.153158  295024 out.go:203] 
	W1009 18:32:09.156454  295024 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:32:09.156485  295024 out.go:285] * 
	* 
	W1009 18:32:09.162912  295024 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:32:09.166239  295024 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (43.08s)

TestAddons/parallel/Headlamp (3.26s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-419518 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-419518 --alsologtostderr -v=1: exit status 11 (299.464871ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 18:31:22.891051  293258 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:31:22.893597  293258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:22.893612  293258 out.go:374] Setting ErrFile to fd 2...
	I1009 18:31:22.893618  293258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:22.893916  293258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:31:22.894268  293258 mustload.go:65] Loading cluster: addons-419518
	I1009 18:31:22.894646  293258 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:22.894658  293258 addons.go:606] checking whether the cluster is paused
	I1009 18:31:22.894765  293258 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:22.894780  293258 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:31:22.895263  293258 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:31:22.934924  293258 ssh_runner.go:195] Run: systemctl --version
	I1009 18:31:22.934980  293258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:31:22.962300  293258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:31:23.068902  293258 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:31:23.068999  293258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:31:23.101630  293258 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:31:23.101653  293258 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:31:23.101659  293258 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:31:23.101663  293258 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:31:23.101667  293258 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:31:23.101671  293258 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:31:23.101675  293258 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:31:23.101678  293258 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:31:23.101682  293258 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:31:23.101688  293258 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:31:23.101691  293258 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:31:23.101694  293258 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:31:23.101697  293258 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:31:23.101701  293258 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:31:23.101704  293258 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:31:23.101711  293258 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:31:23.101761  293258 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:31:23.101773  293258 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:31:23.101777  293258 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:31:23.101780  293258 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:31:23.101786  293258 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:31:23.101789  293258 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:31:23.101792  293258 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:31:23.101795  293258 cri.go:89] found id: ""
	I1009 18:31:23.101853  293258 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:31:23.116777  293258 out.go:203] 
	W1009 18:31:23.119707  293258 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:31:23.119732  293258 out.go:285] * 
	* 
	W1009 18:31:23.126191  293258 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:31:23.129393  293258 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-419518 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-419518
helpers_test.go:243: (dbg) docker inspect addons-419518:

-- stdout --
	[
	    {
	        "Id": "56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321",
	        "Created": "2025-10-09T18:28:42.324058319Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:28:42.388519821Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/hostname",
	        "HostsPath": "/var/lib/docker/containers/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/hosts",
	        "LogPath": "/var/lib/docker/containers/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321-json.log",
	        "Name": "/addons-419518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-419518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-419518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321",
	                "LowerDir": "/var/lib/docker/overlay2/7484de4b20c4ac2fe3289c42975fdfd1e60d70a00b89c0a1484add136bc9aa43-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7484de4b20c4ac2fe3289c42975fdfd1e60d70a00b89c0a1484add136bc9aa43/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7484de4b20c4ac2fe3289c42975fdfd1e60d70a00b89c0a1484add136bc9aa43/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7484de4b20c4ac2fe3289c42975fdfd1e60d70a00b89c0a1484add136bc9aa43/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-419518",
	                "Source": "/var/lib/docker/volumes/addons-419518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-419518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-419518",
	                "name.minikube.sigs.k8s.io": "addons-419518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fc0cbe83a23ef0fe527d97f52e6000b554580b7bab280db2d5f49fb6bb2b55c",
	            "SandboxKey": "/var/run/docker/netns/8fc0cbe83a23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-419518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:b9:06:ae:9c:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0be5c0b9ee5b9c522294f1cb4a7d749e78a12a4263f461a27a66ca4494c30aa4",
	                    "EndpointID": "69fad6a6eebaff057d0c26eddb4dcf8abbccda04805e4585fdb55dbd7c187c19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-419518",
	                        "56d0a47d6947"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-419518 -n addons-419518
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-419518 logs -n 25: (1.50730548s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-800425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-800425   │ jenkins │ v1.37.0 │ 09 Oct 25 18:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ delete  │ -p download-only-800425                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-800425   │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ start   │ -o=json --download-only -p download-only-958806 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-958806   │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ delete  │ -p download-only-958806                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-958806   │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ delete  │ -p download-only-800425                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-800425   │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ delete  │ -p download-only-958806                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-958806   │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ --download-only -p download-docker-187653 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-187653 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ delete  │ -p download-docker-187653                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-187653 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ start   │ --download-only -p binary-mirror-572714 --alsologtostderr --binary-mirror http://127.0.0.1:37233 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-572714   │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ delete  │ -p binary-mirror-572714                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-572714   │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ addons  │ enable dashboard -p addons-419518                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-419518                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ start   │ -p addons-419518 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:31 UTC │
	│ addons  │ addons-419518 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ addons  │ addons-419518 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-419518 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-419518          │ jenkins │ v1.37.0 │ 09 Oct 25 18:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:28:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:28:16.383310  287073 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:16.383978  287073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:16.383993  287073 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:16.383998  287073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:16.384287  287073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:28:16.384772  287073 out.go:368] Setting JSON to false
	I1009 18:28:16.385563  287073 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4248,"bootTime":1760030249,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 18:28:16.385629  287073 start.go:141] virtualization:  
	I1009 18:28:16.389206  287073 out.go:179] * [addons-419518] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 18:28:16.393010  287073 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:28:16.393112  287073 notify.go:220] Checking for updates...
	I1009 18:28:16.398930  287073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:28:16.401830  287073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:28:16.404810  287073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 18:28:16.407697  287073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:28:16.410525  287073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:28:16.413499  287073 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:28:16.434348  287073 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 18:28:16.434485  287073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:16.494776  287073 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 18:28:16.486052862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:28:16.494878  287073 docker.go:318] overlay module found
	I1009 18:28:16.497866  287073 out.go:179] * Using the docker driver based on user configuration
	I1009 18:28:16.500723  287073 start.go:305] selected driver: docker
	I1009 18:28:16.500745  287073 start.go:925] validating driver "docker" against <nil>
	I1009 18:28:16.500759  287073 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:28:16.501469  287073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:16.557338  287073 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 18:28:16.548271133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:28:16.557506  287073 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:28:16.557730  287073 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:28:16.560557  287073 out.go:179] * Using Docker driver with root privileges
	I1009 18:28:16.563357  287073 cni.go:84] Creating CNI manager for ""
	I1009 18:28:16.563425  287073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:16.563433  287073 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:28:16.563526  287073 start.go:349] cluster config:
	{Name:addons-419518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:16.568454  287073 out.go:179] * Starting "addons-419518" primary control-plane node in "addons-419518" cluster
	I1009 18:28:16.571245  287073 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:28:16.574115  287073 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:28:16.576915  287073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:16.576970  287073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 18:28:16.576983  287073 cache.go:64] Caching tarball of preloaded images
	I1009 18:28:16.576998  287073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:28:16.577080  287073 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 18:28:16.577095  287073 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:28:16.577424  287073 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/config.json ...
	I1009 18:28:16.577455  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/config.json: {Name:mk38bba8b563021566f9112ebaf96251a12ac9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:16.592694  287073 cache.go:162] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:28:16.592843  287073 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 18:28:16.592863  287073 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 18:28:16.592868  287073 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 18:28:16.592875  287073 cache.go:165] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 18:28:16.592881  287073 cache.go:175] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1009 18:28:34.718653  287073 cache.go:177] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1009 18:28:34.718692  287073 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:28:34.718722  287073 start.go:360] acquireMachinesLock for addons-419518: {Name:mk799c7ee93ae50f3bf399d14394c57303eda19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:28:34.719454  287073 start.go:364] duration metric: took 694.245µs to acquireMachinesLock for "addons-419518"
	I1009 18:28:34.719490  287073 start.go:93] Provisioning new machine with config: &{Name:addons-419518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:28:34.719560  287073 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:28:34.722955  287073 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1009 18:28:34.723189  287073 start.go:159] libmachine.API.Create for "addons-419518" (driver="docker")
	I1009 18:28:34.723234  287073 client.go:168] LocalClient.Create starting
	I1009 18:28:34.723347  287073 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 18:28:35.262299  287073 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 18:28:35.604897  287073 cli_runner.go:164] Run: docker network inspect addons-419518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:28:35.620494  287073 cli_runner.go:211] docker network inspect addons-419518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:28:35.620588  287073 network_create.go:284] running [docker network inspect addons-419518] to gather additional debugging logs...
	I1009 18:28:35.620611  287073 cli_runner.go:164] Run: docker network inspect addons-419518
	W1009 18:28:35.636031  287073 cli_runner.go:211] docker network inspect addons-419518 returned with exit code 1
	I1009 18:28:35.636064  287073 network_create.go:287] error running [docker network inspect addons-419518]: docker network inspect addons-419518: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-419518 not found
	I1009 18:28:35.636079  287073 network_create.go:289] output of [docker network inspect addons-419518]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-419518 not found
	
	** /stderr **
	I1009 18:28:35.636200  287073 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:35.652176  287073 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05d40}
	I1009 18:28:35.652216  287073 network_create.go:124] attempt to create docker network addons-419518 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:28:35.652290  287073 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-419518 addons-419518
	I1009 18:28:35.709367  287073 network_create.go:108] docker network addons-419518 192.168.49.0/24 created
	I1009 18:28:35.709403  287073 kic.go:121] calculated static IP "192.168.49.2" for the "addons-419518" container
	I1009 18:28:35.709491  287073 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:28:35.725681  287073 cli_runner.go:164] Run: docker volume create addons-419518 --label name.minikube.sigs.k8s.io=addons-419518 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:28:35.743044  287073 oci.go:103] Successfully created a docker volume addons-419518
	I1009 18:28:35.743156  287073 cli_runner.go:164] Run: docker run --rm --name addons-419518-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-419518 --entrypoint /usr/bin/test -v addons-419518:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:28:37.813042  287073 cli_runner.go:217] Completed: docker run --rm --name addons-419518-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-419518 --entrypoint /usr/bin/test -v addons-419518:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.06983909s)
	I1009 18:28:37.813071  287073 oci.go:107] Successfully prepared a docker volume addons-419518
	I1009 18:28:37.813116  287073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:37.813127  287073 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:28:37.813186  287073 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-419518:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:28:42.249971  287073 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-419518:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.436740581s)
	I1009 18:28:42.250009  287073 kic.go:203] duration metric: took 4.436877352s to extract preloaded images to volume ...
	W1009 18:28:42.250193  287073 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 18:28:42.250330  287073 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:28:42.309082  287073 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-419518 --name addons-419518 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-419518 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-419518 --network addons-419518 --ip 192.168.49.2 --volume addons-419518:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:28:42.615020  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Running}}
	I1009 18:28:42.637639  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:28:42.664945  287073 cli_runner.go:164] Run: docker exec addons-419518 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:28:42.723008  287073 oci.go:144] the created container "addons-419518" has a running status.
	I1009 18:28:42.723034  287073 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa...
	I1009 18:28:43.255817  287073 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:28:43.276303  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:28:43.293575  287073 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:28:43.293598  287073 kic_runner.go:114] Args: [docker exec --privileged addons-419518 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:28:43.333460  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:28:43.350688  287073 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:43.350807  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:43.367047  287073 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:43.367375  287073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1009 18:28:43.367390  287073 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:43.367961  287073 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36950->127.0.0.1:33140: read: connection reset by peer
	I1009 18:28:46.513684  287073 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-419518
	
	I1009 18:28:46.513710  287073 ubuntu.go:182] provisioning hostname "addons-419518"
	I1009 18:28:46.513784  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:46.531372  287073 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:46.531677  287073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1009 18:28:46.531700  287073 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-419518 && echo "addons-419518" | sudo tee /etc/hostname
	I1009 18:28:46.682750  287073 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-419518
	
	I1009 18:28:46.682829  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:46.700476  287073 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:46.700790  287073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1009 18:28:46.700812  287073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-419518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-419518/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-419518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:46.850553  287073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:46.850643  287073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 18:28:46.850702  287073 ubuntu.go:190] setting up certificates
	I1009 18:28:46.850739  287073 provision.go:84] configureAuth start
	I1009 18:28:46.850832  287073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-419518
	I1009 18:28:46.867008  287073 provision.go:143] copyHostCerts
	I1009 18:28:46.867095  287073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 18:28:46.867217  287073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 18:28:46.867272  287073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 18:28:46.867315  287073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.addons-419518 san=[127.0.0.1 192.168.49.2 addons-419518 localhost minikube]
	I1009 18:28:47.225390  287073 provision.go:177] copyRemoteCerts
	I1009 18:28:47.225466  287073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:47.225536  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.243705  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:47.346245  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:28:47.364220  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:28:47.380670  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:28:47.397726  287073 provision.go:87] duration metric: took 546.958716ms to configureAuth
	I1009 18:28:47.397751  287073 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:47.397960  287073 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:47.398072  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.416549  287073 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:47.416853  287073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1009 18:28:47.416868  287073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:28:47.670421  287073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:28:47.670538  287073 machine.go:96] duration metric: took 4.319827498s to provisionDockerMachine
	I1009 18:28:47.670607  287073 client.go:171] duration metric: took 12.947333618s to LocalClient.Create
	I1009 18:28:47.670654  287073 start.go:167] duration metric: took 12.947463956s to libmachine.API.Create "addons-419518"
	I1009 18:28:47.670685  287073 start.go:293] postStartSetup for "addons-419518" (driver="docker")
	I1009 18:28:47.670727  287073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:47.670820  287073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:47.670953  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.689525  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:47.798162  287073 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:47.801282  287073 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:47.801312  287073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:47.801324  287073 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 18:28:47.801389  287073 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 18:28:47.801415  287073 start.go:296] duration metric: took 130.694173ms for postStartSetup
	I1009 18:28:47.801719  287073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-419518
	I1009 18:28:47.817688  287073 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/config.json ...
	I1009 18:28:47.817977  287073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:47.818026  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.834293  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:47.930862  287073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:47.935499  287073 start.go:128] duration metric: took 13.215925289s to createHost
	I1009 18:28:47.935521  287073 start.go:83] releasing machines lock for "addons-419518", held for 13.21605045s
	I1009 18:28:47.935609  287073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-419518
	I1009 18:28:47.951558  287073 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:47.951578  287073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:47.951610  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.951637  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:28:47.968735  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:47.975780  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:28:48.166035  287073 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:48.172300  287073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:28:48.208833  287073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:48.213094  287073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:48.213178  287073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:48.241954  287073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 18:28:48.241979  287073 start.go:495] detecting cgroup driver to use...
	I1009 18:28:48.242023  287073 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 18:28:48.242089  287073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:28:48.259362  287073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:28:48.271911  287073 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:48.271999  287073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:48.289396  287073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:48.308160  287073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:48.422068  287073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:48.543662  287073 docker.go:234] disabling docker service ...
	I1009 18:28:48.543731  287073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:48.563889  287073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:48.576745  287073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:48.684979  287073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:48.802459  287073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:48.815400  287073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:48.829902  287073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:28:48.829982  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.838907  287073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:28:48.838984  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.847775  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.856619  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.865496  287073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:48.873358  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.882955  287073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.896264  287073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:48.904917  287073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:48.912659  287073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:48.919897  287073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:49.034967  287073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:28:49.173667  287073 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:28:49.173818  287073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:28:49.177447  287073 start.go:563] Will wait 60s for crictl version
	I1009 18:28:49.177557  287073 ssh_runner.go:195] Run: which crictl
	I1009 18:28:49.181037  287073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:49.210098  287073 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:28:49.210307  287073 ssh_runner.go:195] Run: crio --version
	I1009 18:28:49.237218  287073 ssh_runner.go:195] Run: crio --version
	I1009 18:28:49.270591  287073 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:28:49.273464  287073 cli_runner.go:164] Run: docker network inspect addons-419518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:49.289001  287073 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:49.292769  287073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:49.302073  287073 kubeadm.go:883] updating cluster {Name:addons-419518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:28:49.302229  287073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:49.302283  287073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:49.339339  287073 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:49.339363  287073 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:28:49.339418  287073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:49.368132  287073 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:49.368156  287073 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:28:49.368165  287073 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:28:49.368249  287073 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-419518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:28:49.368337  287073 ssh_runner.go:195] Run: crio config
	I1009 18:28:49.440594  287073 cni.go:84] Creating CNI manager for ""
	I1009 18:28:49.440618  287073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:49.440638  287073 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:49.440662  287073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-419518 NodeName:addons-419518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:49.440790  287073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-419518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:28:49.440867  287073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:28:49.448483  287073 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:28:49.448596  287073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:49.455882  287073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 18:28:49.468592  287073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:49.480918  287073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1009 18:28:49.492979  287073 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:49.496336  287073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:28:49.505609  287073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:49.616678  287073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:28:49.632180  287073 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518 for IP: 192.168.49.2
	I1009 18:28:49.632203  287073 certs.go:195] generating shared ca certs ...
	I1009 18:28:49.632221  287073 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:49.632352  287073 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 18:28:49.786119  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt ...
	I1009 18:28:49.786153  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt: {Name:mk1860adab5beccf33a1f32dfcd270757df005b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:49.786367  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key ...
	I1009 18:28:49.786382  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key: {Name:mk3320be062f4dee91fc84c7f329a34d237b7502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:49.787116  287073 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 18:28:50.399460  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt ...
	I1009 18:28:50.399491  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt: {Name:mk26acd207efcd41f9412775a1e0407b14d413d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:50.400257  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key ...
	I1009 18:28:50.400279  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key: {Name:mkc06cb89887ad60290183cd7568aaa19cef53d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:50.400365  287073 certs.go:257] generating profile certs ...
	I1009 18:28:50.400426  287073 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.key
	I1009 18:28:50.400445  287073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt with IP's: []
	I1009 18:28:51.015473  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt ...
	I1009 18:28:51.015509  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: {Name:mk78dfa52f9042240dcabd55167ef3c11cf2e69d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:51.015726  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.key ...
	I1009 18:28:51.015746  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.key: {Name:mkb16c588a54c1c2ed524db38307aaab1a59b1dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:51.016468  287073 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key.f0bdca2c
	I1009 18:28:51.016495  287073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt.f0bdca2c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:28:51.750977  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt.f0bdca2c ...
	I1009 18:28:51.751009  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt.f0bdca2c: {Name:mkc7c9dd7e400ec5f1b2f053bc73347849651ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:51.751200  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key.f0bdca2c ...
	I1009 18:28:51.751214  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key.f0bdca2c: {Name:mk8e7f48ee0436fbe12d13f9bfc9c29d4e972878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:51.751298  287073 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt.f0bdca2c -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt
	I1009 18:28:51.751377  287073 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key.f0bdca2c -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key
	I1009 18:28:51.751423  287073 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.key
	I1009 18:28:51.751446  287073 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.crt with IP's: []
	I1009 18:28:53.143796  287073 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.crt ...
	I1009 18:28:53.143827  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.crt: {Name:mka99b3fbd9ad4dfe6aa98d60282e420743894b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:53.144694  287073 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.key ...
	I1009 18:28:53.144713  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.key: {Name:mkeba52bb76e31f0edf7518f31c096524489007f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:53.144909  287073 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:53.144952  287073 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:28:53.144983  287073 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:53.145011  287073 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 18:28:53.145584  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:53.164068  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:28:53.181369  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:53.198620  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:53.215940  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:28:53.233819  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:28:53.251298  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:53.268803  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:28:53.287014  287073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:53.304490  287073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:53.317015  287073 ssh_runner.go:195] Run: openssl version
	I1009 18:28:53.323339  287073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:53.331693  287073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:53.335290  287073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:53.335361  287073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:53.378772  287073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:28:53.388206  287073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:28:53.393070  287073 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:28:53.393171  287073 kubeadm.go:400] StartCluster: {Name:addons-419518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-419518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:53.393305  287073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:53.393417  287073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:53.427736  287073 cri.go:89] found id: ""
	I1009 18:28:53.427889  287073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:53.439190  287073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:53.448944  287073 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:28:53.449015  287073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:53.457463  287073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:28:53.457496  287073 kubeadm.go:157] found existing configuration files:
	
	I1009 18:28:53.457588  287073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:28:53.466258  287073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:28:53.466324  287073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:28:53.473783  287073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:28:53.481847  287073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:28:53.481917  287073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:28:53.490087  287073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:28:53.498107  287073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:28:53.498267  287073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:28:53.505788  287073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:28:53.513568  287073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:28:53.513704  287073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:28:53.521394  287073 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:28:53.583888  287073 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 18:28:53.584142  287073 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 18:28:53.647634  287073 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:29:09.520579  287073 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:29:09.520655  287073 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:29:09.520781  287073 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:29:09.520871  287073 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 18:29:09.520914  287073 kubeadm.go:318] OS: Linux
	I1009 18:29:09.520974  287073 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:29:09.521033  287073 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 18:29:09.521084  287073 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:29:09.521145  287073 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:29:09.521214  287073 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:29:09.521279  287073 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:29:09.521338  287073 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:29:09.521392  287073 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:29:09.521445  287073 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 18:29:09.521532  287073 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:29:09.521634  287073 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:29:09.521751  287073 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:29:09.521820  287073 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:29:09.524797  287073 out.go:252]   - Generating certificates and keys ...
	I1009 18:29:09.524900  287073 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:29:09.525006  287073 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:29:09.525108  287073 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:29:09.525192  287073 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:29:09.525258  287073 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:29:09.525313  287073 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:29:09.525378  287073 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:29:09.525500  287073 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-419518 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:29:09.525556  287073 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:29:09.525674  287073 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-419518 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:29:09.525743  287073 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:29:09.525811  287073 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:29:09.525862  287073 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:29:09.525922  287073 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:29:09.525995  287073 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:29:09.526057  287073 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:29:09.526116  287073 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:29:09.526208  287073 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:29:09.526268  287073 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:29:09.526352  287073 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:29:09.526421  287073 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:29:09.529460  287073 out.go:252]   - Booting up control plane ...
	I1009 18:29:09.529590  287073 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:29:09.529682  287073 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:29:09.529774  287073 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:29:09.529902  287073 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:29:09.529997  287073 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:29:09.530102  287073 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:29:09.530212  287073 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:29:09.530305  287073 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:29:09.530450  287073 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:29:09.530560  287073 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:29:09.530624  287073 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.006155286s
	I1009 18:29:09.530725  287073 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:29:09.530810  287073 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:29:09.530902  287073 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:29:09.530984  287073 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:29:09.531062  287073 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.740782996s
	I1009 18:29:09.531132  287073 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.959780643s
	I1009 18:29:09.531202  287073 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502154429s
	I1009 18:29:09.531309  287073 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:29:09.531436  287073 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:29:09.531497  287073 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:29:09.531700  287073 kubeadm.go:318] [mark-control-plane] Marking the node addons-419518 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:29:09.531760  287073 kubeadm.go:318] [bootstrap-token] Using token: oq7qdz.vhp4g7s58eo9w6q7
	I1009 18:29:09.534726  287073 out.go:252]   - Configuring RBAC rules ...
	I1009 18:29:09.534874  287073 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:29:09.534998  287073 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:29:09.535188  287073 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:29:09.535349  287073 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:29:09.535484  287073 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:29:09.535588  287073 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:29:09.535753  287073 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:29:09.535812  287073 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 18:29:09.535887  287073 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 18:29:09.535900  287073 kubeadm.go:318] 
	I1009 18:29:09.535975  287073 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 18:29:09.535983  287073 kubeadm.go:318] 
	I1009 18:29:09.536070  287073 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 18:29:09.536078  287073 kubeadm.go:318] 
	I1009 18:29:09.536106  287073 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 18:29:09.536183  287073 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:29:09.536242  287073 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:29:09.536250  287073 kubeadm.go:318] 
	I1009 18:29:09.536321  287073 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 18:29:09.536352  287073 kubeadm.go:318] 
	I1009 18:29:09.536419  287073 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:29:09.536458  287073 kubeadm.go:318] 
	I1009 18:29:09.536535  287073 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 18:29:09.536641  287073 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:29:09.536755  287073 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:29:09.536779  287073 kubeadm.go:318] 
	I1009 18:29:09.536915  287073 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:29:09.537042  287073 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 18:29:09.537050  287073 kubeadm.go:318] 
	I1009 18:29:09.537147  287073 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token oq7qdz.vhp4g7s58eo9w6q7 \
	I1009 18:29:09.537264  287073 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 18:29:09.537286  287073 kubeadm.go:318] 	--control-plane 
	I1009 18:29:09.537291  287073 kubeadm.go:318] 
	I1009 18:29:09.537387  287073 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:29:09.537392  287073 kubeadm.go:318] 
	I1009 18:29:09.537485  287073 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token oq7qdz.vhp4g7s58eo9w6q7 \
	I1009 18:29:09.537613  287073 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 18:29:09.537622  287073 cni.go:84] Creating CNI manager for ""
	I1009 18:29:09.537629  287073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:29:09.540699  287073 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 18:29:09.543774  287073 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 18:29:09.547895  287073 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 18:29:09.547968  287073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 18:29:09.560979  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 18:29:09.853183  287073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:29:09.853382  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:09.853464  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-419518 minikube.k8s.io/updated_at=2025_10_09T18_29_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=addons-419518 minikube.k8s.io/primary=true
	I1009 18:29:10.027144  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:10.027225  287073 ops.go:34] apiserver oom_adj: -16
	I1009 18:29:10.528085  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:11.027672  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:11.527259  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:12.027930  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:12.528214  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:13.027912  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:13.528126  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:14.027429  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:14.527264  287073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:29:14.693411  287073 kubeadm.go:1113] duration metric: took 4.840074954s to wait for elevateKubeSystemPrivileges
	I1009 18:29:14.693456  287073 kubeadm.go:402] duration metric: took 21.300290128s to StartCluster
	I1009 18:29:14.693482  287073 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:29:14.693613  287073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:29:14.694436  287073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:29:14.695088  287073 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:29:14.696090  287073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:29:14.696449  287073 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:29:14.696512  287073 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:29:14.696682  287073 addons.go:69] Setting yakd=true in profile "addons-419518"
	I1009 18:29:14.696703  287073 addons.go:238] Setting addon yakd=true in "addons-419518"
	I1009 18:29:14.696738  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.697414  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.699870  287073 addons.go:69] Setting metrics-server=true in profile "addons-419518"
	I1009 18:29:14.699893  287073 addons.go:238] Setting addon metrics-server=true in "addons-419518"
	I1009 18:29:14.699920  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.700441  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.701253  287073 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-419518"
	I1009 18:29:14.701347  287073 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-419518"
	I1009 18:29:14.701423  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.705638  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.711108  287073 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-419518"
	I1009 18:29:14.711145  287073 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-419518"
	I1009 18:29:14.711180  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.711733  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.719940  287073 addons.go:69] Setting cloud-spanner=true in profile "addons-419518"
	I1009 18:29:14.719973  287073 addons.go:238] Setting addon cloud-spanner=true in "addons-419518"
	I1009 18:29:14.720008  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.721574  287073 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-419518"
	I1009 18:29:14.721633  287073 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-419518"
	I1009 18:29:14.721658  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.722337  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.730759  287073 addons.go:69] Setting default-storageclass=true in profile "addons-419518"
	I1009 18:29:14.730769  287073 addons.go:69] Setting registry=true in profile "addons-419518"
	I1009 18:29:14.730792  287073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-419518"
	I1009 18:29:14.730799  287073 addons.go:238] Setting addon registry=true in "addons-419518"
	I1009 18:29:14.730840  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.731113  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.731324  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.743383  287073 addons.go:69] Setting registry-creds=true in profile "addons-419518"
	I1009 18:29:14.743437  287073 addons.go:238] Setting addon registry-creds=true in "addons-419518"
	I1009 18:29:14.743482  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.750064  287073 addons.go:69] Setting gcp-auth=true in profile "addons-419518"
	I1009 18:29:14.750107  287073 mustload.go:65] Loading cluster: addons-419518
	I1009 18:29:14.750511  287073 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:29:14.750785  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.751204  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.774347  287073 addons.go:69] Setting storage-provisioner=true in profile "addons-419518"
	I1009 18:29:14.774394  287073 addons.go:238] Setting addon storage-provisioner=true in "addons-419518"
	I1009 18:29:14.774435  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.775018  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.783593  287073 addons.go:69] Setting ingress=true in profile "addons-419518"
	I1009 18:29:14.783714  287073 addons.go:238] Setting addon ingress=true in "addons-419518"
	I1009 18:29:14.783817  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.784500  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.808583  287073 addons.go:69] Setting ingress-dns=true in profile "addons-419518"
	I1009 18:29:14.808673  287073 addons.go:238] Setting addon ingress-dns=true in "addons-419518"
	I1009 18:29:14.808758  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.809357  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.810121  287073 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-419518"
	I1009 18:29:14.810199  287073 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-419518"
	I1009 18:29:14.810724  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.841999  287073 addons.go:69] Setting inspektor-gadget=true in profile "addons-419518"
	I1009 18:29:14.842097  287073 addons.go:238] Setting addon inspektor-gadget=true in "addons-419518"
	I1009 18:29:14.842196  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.842920  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.843687  287073 addons.go:69] Setting volcano=true in profile "addons-419518"
	I1009 18:29:14.843769  287073 addons.go:238] Setting addon volcano=true in "addons-419518"
	I1009 18:29:14.843893  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.844881  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.865953  287073 out.go:179] * Verifying Kubernetes components...
	I1009 18:29:14.866344  287073 addons.go:69] Setting volumesnapshots=true in profile "addons-419518"
	I1009 18:29:14.866374  287073 addons.go:238] Setting addon volumesnapshots=true in "addons-419518"
	I1009 18:29:14.866414  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.866979  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.871792  287073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:29:14.881300  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.944933  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:29:14.951319  287073 addons.go:238] Setting addon default-storageclass=true in "addons-419518"
	I1009 18:29:14.951357  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:14.951872  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:14.996556  287073 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:29:15.001421  287073 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:29:15.004420  287073 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1009 18:29:15.004579  287073 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:29:15.008285  287073 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:29:15.008321  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:29:15.008404  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.053184  287073 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-419518"
	I1009 18:29:15.053232  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:15.053679  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:15.076550  287073 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1009 18:29:15.076783  287073 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1009 18:29:15.091467  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:29:15.091500  287073 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:29:15.091582  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.104535  287073 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1009 18:29:15.117249  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:15.119263  287073 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 18:29:15.119291  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1009 18:29:15.119402  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.130392  287073 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1009 18:29:15.155406  287073 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:29:15.155478  287073 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:29:15.155591  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.181069  287073 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:29:15.181153  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:29:15.181276  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	W1009 18:29:15.202330  287073 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 18:29:15.202813  287073 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1009 18:29:15.204238  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:29:15.206095  287073 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:29:15.206117  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1009 18:29:15.206302  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.212815  287073 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:29:15.231871  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:29:15.234916  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:29:15.238113  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:29:15.240723  287073 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1009 18:29:15.240811  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 18:29:15.245140  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:29:15.248272  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:29:15.248285  287073 out.go:179]   - Using image docker.io/registry:3.0.0
	I1009 18:29:15.248273  287073 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 18:29:15.248416  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:29:15.248427  287073 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:29:15.248501  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.264811  287073 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:29:15.264831  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:29:15.264903  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.248305  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1009 18:29:15.266622  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.290428  287073 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1009 18:29:15.248311  287073 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1009 18:29:15.293501  287073 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:29:15.293516  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:29:15.293584  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.300014  287073 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:29:15.300042  287073 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1009 18:29:15.300108  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.314058  287073 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:29:15.248315  287073 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:29:15.317168  287073 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:29:15.317191  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:29:15.317261  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.320325  287073 out.go:179]   - Using image docker.io/busybox:stable
	I1009 18:29:15.323253  287073 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:29:15.323275  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:29:15.323347  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.350687  287073 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:29:15.353486  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:29:15.353523  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:29:15.353593  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.376175  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.377343  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.378075  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.380070  287073 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:29:15.380085  287073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:29:15.380178  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:15.383583  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.410768  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.416195  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.417498  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.463441  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.486304  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.492407  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.504855  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.510237  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.526395  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.541941  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:15.551380  287073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:29:15.551841  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:16.065175  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:29:16.091079  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 18:29:16.094472  287073 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:29:16.094544  287073 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:29:16.108747  287073 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:29:16.108817  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:29:16.118628  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:29:16.139666  287073 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:29:16.139738  287073 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:29:16.141718  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:29:16.156979  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 18:29:16.161144  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:29:16.181333  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:29:16.190847  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:29:16.196619  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:29:16.198349  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:29:16.198418  287073 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:29:16.202978  287073 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:29:16.203048  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:29:16.231143  287073 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:16.231214  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1009 18:29:16.236664  287073 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:29:16.236728  287073 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:29:16.245661  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:29:16.245739  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:29:16.286459  287073 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:29:16.286529  287073 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:29:16.336654  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:29:16.337994  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:29:16.338052  287073 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:29:16.349772  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:29:16.349842  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:29:16.388923  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:16.428044  287073 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:29:16.428126  287073 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:29:16.450649  287073 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:29:16.450713  287073 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:29:16.493683  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:29:16.493754  287073 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:29:16.542580  287073 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.329733513s)
	I1009 18:29:16.543476  287073 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
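The replace command that just completed edits the coredns ConfigMap in place. Reconstructed from the sed expression above (not captured from the running cluster), the injected Corefile stanza resolves host.minikube.internal to the gateway address, and a log directive is added ahead of errors:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }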
	I1009 18:29:16.543443  287073 node_ready.go:35] waiting up to 6m0s for node "addons-419518" to be "Ready" ...
	I1009 18:29:16.562672  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:29:16.562744  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:29:16.609240  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:29:16.695644  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:29:16.695711  287073 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:29:16.697980  287073 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:29:16.698045  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:29:16.743636  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:29:16.743708  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:29:16.856299  287073 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:29:16.856366  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:29:16.943239  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:29:16.973535  287073 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:29:16.973607  287073 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:29:17.048061  287073 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-419518" context rescaled to 1 replicas
	I1009 18:29:17.058910  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:29:17.176166  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:29:17.176194  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:29:17.407680  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:29:17.407753  287073 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:29:17.434293  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.369041906s)
	I1009 18:29:17.434570  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.34342767s)
	I1009 18:29:17.589427  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:29:17.589451  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:29:17.752418  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:29:17.752447  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:29:18.030200  287073 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:29:18.030228  287073 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:29:18.279354  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1009 18:29:18.629317  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:19.315739  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.197029353s)
	I1009 18:29:20.706212  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.56441406s)
	I1009 18:29:20.706242  287073 addons.go:479] Verifying addon ingress=true in "addons-419518"
	I1009 18:29:20.706604  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.549551742s)
	I1009 18:29:20.706757  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.545541377s)
	I1009 18:29:20.706847  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.525439508s)
	I1009 18:29:20.706901  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.515985161s)
	I1009 18:29:20.706937  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.510258174s)
	I1009 18:29:20.706980  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.370254508s)
	I1009 18:29:20.706991  287073 addons.go:479] Verifying addon registry=true in "addons-419518"
	I1009 18:29:20.707050  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.318044923s)
	W1009 18:29:20.707071  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:20.707090  287073 retry.go:31] will retry after 275.330326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
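Every retry in this loop fails for the same reason: kubectl validates each document in /etc/kubernetes/addons/ig-crd.yaml and reports that apiVersion and kind are not set, so the CRD manifest it was handed is effectively empty or malformed, and the DaemonSet in ig-deployment.yaml is the only part that applies. For reference, a CRD document that passes this validation starts with those two top-level fields; the group and names below are illustrative assumptions, not the contents of the actual ig-crd.yaml:

    # Minimal sketch of a CRD header that kubectl's client-side validation accepts.
    # The group/kind/plural values are placeholders, not the Inspektor Gadget CRD.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.gadget.example.io   # hypothetical name
    spec:
      group: gadget.example.io
      names:
        kind: Example
        plural: examples
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object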
	I1009 18:29:20.707242  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.097928138s)
	I1009 18:29:20.707308  287073 addons.go:479] Verifying addon metrics-server=true in "addons-419518"
	I1009 18:29:20.707567  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.648625719s)
	W1009 18:29:20.707592  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:29:20.707607  287073 retry.go:31] will retry after 325.742627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
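This failure is an ordering problem rather than a bad manifest: csi-hostpath-snapshotclass.yaml references kind VolumeSnapshotClass, but the snapshot.storage.k8s.io CRDs created in the same apply have not been registered with the API server yet, so the resource mapping lookup fails. The retry succeeds once those CRDs are established. As a sketch only (minikube simply retries; this is not what the addon code does), the same race can be avoided by waiting for the CRD before applying objects that depend on it:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f csi-hostpath-snapshotclass.yaml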
	I1009 18:29:20.707694  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.764089943s)
	I1009 18:29:20.711604  287073 out.go:179] * Verifying ingress addon...
	I1009 18:29:20.711616  287073 out.go:179] * Verifying registry addon...
	I1009 18:29:20.713519  287073 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-419518 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:29:20.717024  287073 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:29:20.717888  287073 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:29:20.726513  287073 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:29:20.726534  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:20.731187  287073 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:29:20.731207  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:20.983196  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:21.033657  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1009 18:29:21.053664  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:21.234395  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.954992453s)
	I1009 18:29:21.234504  287073 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-419518"
	I1009 18:29:21.239605  287073 out.go:179] * Verifying csi-hostpath-driver addon...
	I1009 18:29:21.240334  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:21.241054  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:21.243299  287073 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:29:21.282209  287073 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:29:21.282233  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:21.722442  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:21.722610  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:21.822259  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:22.058681  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.075392156s)
	W1009 18:29:22.058733  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:22.058753  287073 retry.go:31] will retry after 411.018462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:22.221078  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:22.221134  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:22.246984  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:22.470811  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:22.722205  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:22.722574  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:22.746895  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:22.808617  287073 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:29:22.808717  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:22.839223  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:22.972813  287073 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:29:22.994056  287073 addons.go:238] Setting addon gcp-auth=true in "addons-419518"
	I1009 18:29:22.994103  287073 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:29:22.994582  287073 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:29:23.023360  287073 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:29:23.023418  287073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:29:23.048449  287073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:29:23.222070  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:23.222818  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:23.247165  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:23.547137  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:23.721516  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:23.722247  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:23.748983  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:23.991210  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.957512159s)
	I1009 18:29:23.991322  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.520469268s)
	W1009 18:29:23.991577  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:23.991605  287073 retry.go:31] will retry after 503.16713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:23.994483  287073 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:29:23.997485  287073 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1009 18:29:24.000296  287073 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:29:24.000325  287073 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:29:24.014714  287073 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:29:24.014748  287073 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:29:24.029786  287073 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:29:24.029812  287073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:29:24.044268  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:29:24.222403  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:24.223076  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:24.247596  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:24.495422  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:24.569499  287073 addons.go:479] Verifying addon gcp-auth=true in "addons-419518"
	I1009 18:29:24.572904  287073 out.go:179] * Verifying gcp-auth addon...
	I1009 18:29:24.576571  287073 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:29:24.590148  287073 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:29:24.590225  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:24.722030  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:24.722622  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:24.746846  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:25.082065  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:25.221446  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:25.221782  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:25.246555  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:25.340200  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:25.340229  287073 retry.go:31] will retry after 1.124225061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:29:25.547274  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:25.580320  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:25.720314  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:25.721535  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:25.746402  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:26.080659  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:26.220906  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:26.221588  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:26.246890  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:26.465296  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:26.579900  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:26.720643  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:26.721882  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:26.747445  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:27.080179  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:27.225131  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:27.225906  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:27.250162  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:27.277331  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:27.277363  287073 retry.go:31] will retry after 1.152654696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:29:27.547720  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:27.580756  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:27.721255  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:27.721431  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:27.747372  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:28.080834  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:28.221292  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:28.221416  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:28.246147  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:28.430297  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:28.580555  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:28.722990  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:28.723074  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:28.747846  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:29.080803  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:29.221489  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:29.222963  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:29.247375  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:29.277038  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:29.277071  287073 retry.go:31] will retry after 1.406240229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:29.586491  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:29.720613  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:29.721352  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:29.747076  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:30.048808  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:30.081398  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:30.220490  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:30.221220  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:30.247130  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:30.582458  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:30.683831  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:30.721875  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:30.722808  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:30.753951  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:31.080077  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:31.221570  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:31.221913  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:31.247379  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:31.520125  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:31.520158  287073 retry.go:31] will retry after 1.978715696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:31.579771  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:31.721180  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:31.721299  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:31.746988  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:32.082345  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:32.220613  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:32.221188  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:32.247022  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:32.548294  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:32.586483  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:32.721715  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:32.722298  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:32.746255  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:33.079751  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:33.221169  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:33.221672  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:33.246561  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:33.499820  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:33.579913  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:33.719575  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:33.721581  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:33.747005  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:34.080562  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:34.220785  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:34.222616  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:34.246947  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:34.296830  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:34.296868  287073 retry.go:31] will retry after 5.768921432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:34.579790  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:34.720770  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:34.720966  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:34.746779  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:35.048017  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:35.081229  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:35.220418  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:35.221308  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:35.246314  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:35.581514  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:35.720433  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:35.720786  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:35.748469  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:36.081641  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:36.220947  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:36.221063  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:36.247152  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:36.583322  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:36.720147  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:36.721523  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:36.746619  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:37.080388  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:37.220452  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:37.221466  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:37.246813  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:37.547719  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:37.580941  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:37.721642  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:37.721781  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:37.746837  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:38.081418  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:38.221329  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:38.221430  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:38.246443  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:38.580158  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:38.721717  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:38.721949  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:38.746624  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:39.079713  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:39.221129  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:39.221188  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:39.246666  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:39.580526  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:39.721049  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:39.721213  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:39.747013  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:40.048348  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:40.066699  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:40.081043  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:40.221137  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:40.222610  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:40.246870  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:40.580187  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:40.722367  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:40.723093  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:40.746877  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:40.891201  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:40.891280  287073 retry.go:31] will retry after 6.829574361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:41.081033  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:41.221547  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:41.221827  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:41.246742  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:41.587429  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:41.721068  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:41.721392  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:41.746985  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:42.081944  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:42.221328  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:42.221702  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:42.247369  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:42.547238  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:42.580402  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:42.720892  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:42.721420  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:42.746840  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:43.079941  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:43.221324  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:43.221387  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:43.247247  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:43.580777  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:43.720752  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:43.721007  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:43.746664  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:44.081548  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:44.221776  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:44.221866  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:44.247037  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:44.548621  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:44.582637  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:44.720924  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:44.721123  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:44.747231  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:45.082476  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:45.221961  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:45.222153  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:45.247511  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:45.580317  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:45.720245  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:45.720884  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:45.746822  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:46.080441  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:46.221173  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:46.222560  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:46.246154  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:46.583159  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:46.719929  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:46.720907  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:46.746921  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:47.048004  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:47.080925  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:47.220960  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:47.221160  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:47.246883  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:47.581251  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:47.719974  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:47.720965  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:47.721250  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:47.747325  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:48.081500  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:48.222346  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:48.223213  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:48.246606  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:48.522974  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:48.523002  287073 retry.go:31] will retry after 6.073924032s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:48.579934  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:48.721002  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:48.721214  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:48.747148  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:49.080404  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:49.220155  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:49.221959  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:49.247129  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:49.547217  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:49.580150  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:49.719884  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:49.721347  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:49.746880  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:50.080620  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:50.221074  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:50.221220  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:50.246005  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:50.580113  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:50.721077  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:50.721197  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:50.746975  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:51.080039  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:51.220096  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:51.221051  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:51.246788  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:51.548073  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:51.580269  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:51.721399  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:51.721953  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:51.746761  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:52.081141  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:52.219874  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:52.220986  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:52.247271  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:52.580784  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:52.720754  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:52.720949  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:52.746832  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:53.081444  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:53.221608  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:53.221870  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:53.246879  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:53.550000  287073 node_ready.go:57] node "addons-419518" has "Ready":"False" status (will retry)
	I1009 18:29:53.580079  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:53.720019  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:53.720881  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:53.746901  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:54.080916  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:54.221504  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:54.221932  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:54.246900  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:54.586910  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:54.597301  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:29:54.722964  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:54.723098  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:54.747425  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:55.081888  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:55.222025  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:55.222400  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:55.246250  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:29:55.409982  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:29:55.410028  287073 retry.go:31] will retry after 15.275743812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
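
The repeated apply failures above all report the same cause: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest has no apiVersion and no kind set, so retry.go keeps scheduling another apply after a varying delay. Below is a minimal illustrative sketch of that retry-after-delay pattern; the helper name retryWithBackoff and the fixed attempt count are assumptions for illustration, not minikube's actual retry.go code.

// Illustrative sketch only: re-run an apply with a growing wait between
// attempts, in the spirit of the retry.go log lines above. retryWithBackoff
// is a hypothetical name, not minikube's API.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryWithBackoff runs fn up to attempts times, sleeping an increasing
// delay between failures, and returns the last error if all attempts fail.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts (the real delays are jittered)
	}
	return err
}

func main() {
	// Re-apply the addon manifests until kubectl succeeds or we give up.
	err := retryWithBackoff(5, 5*time.Second, func() error {
		cmd := exec.Command("kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/ig-crd.yaml",
			"-f", "/etc/kubernetes/addons/ig-deployment.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
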
	I1009 18:29:55.579852  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:55.722488  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:55.722926  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:55.750640  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:56.086041  287073 node_ready.go:49] node "addons-419518" is "Ready"
	I1009 18:29:56.086073  287073 node_ready.go:38] duration metric: took 39.541730878s for node "addons-419518" to be "Ready" ...
	I1009 18:29:56.086088  287073 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:29:56.086168  287073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:56.133821  287073 api_server.go:72] duration metric: took 41.438694191s to wait for apiserver process to appear ...
	I1009 18:29:56.133851  287073 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:29:56.133872  287073 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 18:29:56.171794  287073 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 18:29:56.177918  287073 api_server.go:141] control plane version: v1.34.1
	I1009 18:29:56.177965  287073 api_server.go:131] duration metric: took 44.091438ms to wait for apiserver health ...
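
Once the node reports Ready, the harness waits for the kube-apiserver process and then polls https://192.168.49.2:8443/healthz until it answers 200 before reading the control-plane version. The following is a minimal sketch of such a health poll, assuming a plain HTTPS GET with certificate verification disabled for the local self-signed endpoint; it illustrates the idea seen in the api_server.go lines above, not minikube's implementation.

// Illustrative sketch only: poll an apiserver /healthz URL until it returns
// 200. The URL and the idea of a bounded wait come from the log above; the
// function name and timeouts are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed certificate, so skip verification
	// for this local health probe only.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}
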
	I1009 18:29:56.177998  287073 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:29:56.178406  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:56.196123  287073 system_pods.go:59] 19 kube-system pods found
	I1009 18:29:56.196174  287073 system_pods.go:61] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:56.196182  287073 system_pods.go:61] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending
	I1009 18:29:56.196231  287073 system_pods.go:61] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending
	I1009 18:29:56.196240  287073 system_pods.go:61] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending
	I1009 18:29:56.196246  287073 system_pods.go:61] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:56.196250  287073 system_pods.go:61] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:56.196255  287073 system_pods.go:61] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:56.196260  287073 system_pods.go:61] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:56.196300  287073 system_pods.go:61] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending
	I1009 18:29:56.196310  287073 system_pods.go:61] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:56.196315  287073 system_pods.go:61] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:56.196322  287073 system_pods.go:61] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:56.196331  287073 system_pods.go:61] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending
	I1009 18:29:56.196339  287073 system_pods.go:61] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:56.196346  287073 system_pods.go:61] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:56.196383  287073 system_pods.go:61] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending
	I1009 18:29:56.196400  287073 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending
	I1009 18:29:56.196408  287073 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending
	I1009 18:29:56.196419  287073 system_pods.go:61] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending
	I1009 18:29:56.196443  287073 system_pods.go:74] duration metric: took 18.436522ms to wait for pod list to return data ...
	I1009 18:29:56.196457  287073 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:29:56.204863  287073 default_sa.go:45] found service account: "default"
	I1009 18:29:56.204900  287073 default_sa.go:55] duration metric: took 8.434849ms for default service account to be created ...
	I1009 18:29:56.204910  287073 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:29:56.244331  287073 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:29:56.244357  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:56.244573  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:56.245160  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:56.245193  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:56.245199  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending
	I1009 18:29:56.245205  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending
	I1009 18:29:56.245209  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending
	I1009 18:29:56.245213  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:56.245218  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:56.245225  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:56.245229  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:56.245249  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending
	I1009 18:29:56.245259  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:56.245264  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:56.245272  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:56.245281  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending
	I1009 18:29:56.245289  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:56.245295  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:56.245303  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending
	I1009 18:29:56.245320  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.245330  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending
	I1009 18:29:56.245335  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending
	I1009 18:29:56.245351  287073 retry.go:31] will retry after 222.920831ms: missing components: kube-dns
	I1009 18:29:56.253008  287073 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:29:56.253032  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
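
The kapi.go lines that dominate this log are label-selector waits: for each addon the harness lists the pods matching a selector such as kubernetes.io/minikube-addons=registry and keeps polling while any of them is still Pending. Below is a minimal client-go sketch of that loop, assuming the kubeconfig path shown in the log and a hypothetical waitForLabel helper; it is not minikube's kapi.go implementation.

// Illustrative sketch only: wait for pods matching a label selector to leave
// Pending, in the spirit of the kapi.go lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(clientset *kubernetes.Clientset, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := clientset.CoreV1().Pods("kube-system").List(
			context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods for %q not running within %s", selector, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	_ = waitForLabel(clientset, "kubernetes.io/minikube-addons=registry", 5*time.Minute)
}
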
	I1009 18:29:56.501416  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:56.501467  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:56.501485  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:29:56.501491  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending
	I1009 18:29:56.501501  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending
	I1009 18:29:56.501505  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:56.501510  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:56.501527  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:56.501532  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:56.501549  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:29:56.501554  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:56.501565  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:56.501572  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:56.501576  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending
	I1009 18:29:56.501582  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:56.501591  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:56.501608  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending
	I1009 18:29:56.501616  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.501628  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.501634  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 18:29:56.501662  287073 retry.go:31] will retry after 268.332441ms: missing components: kube-dns
	I1009 18:29:56.598111  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:56.732338  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:56.732521  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:56.747113  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:56.776455  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:56.776503  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:56.776513  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:29:56.776521  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:29:56.776527  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:29:56.776532  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:56.776537  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:56.776541  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:56.776558  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:56.776571  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:29:56.776575  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:56.776580  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:56.776592  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:56.776600  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:29:56.776609  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:56.776619  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:56.776633  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:29:56.776642  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.776652  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:56.776662  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 18:29:56.776678  287073 retry.go:31] will retry after 427.584806ms: missing components: kube-dns
	I1009 18:29:57.080548  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:57.210875  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:57.210915  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:29:57.210934  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:29:57.210942  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:29:57.210949  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:29:57.210955  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:57.210961  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:57.210967  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:57.210971  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:57.210978  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:29:57.210986  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:57.210990  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:57.211006  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:57.211024  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:29:57.211031  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:57.211037  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:57.211046  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:29:57.211053  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:57.211062  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:57.211068  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 18:29:57.211089  287073 retry.go:31] will retry after 572.28595ms: missing components: kube-dns
	I1009 18:29:57.222182  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:57.222258  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:57.247774  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:57.580131  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:57.722254  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:57.722568  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:57.746777  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:57.788911  287073 system_pods.go:86] 19 kube-system pods found
	I1009 18:29:57.788958  287073 system_pods.go:89] "coredns-66bc5c9577-ts42b" [45df12a8-0296-4963-917c-1b76557a019e] Running
	I1009 18:29:57.788969  287073 system_pods.go:89] "csi-hostpath-attacher-0" [f516a53a-a139-4c3f-87bf-e9dbf0ba4320] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:29:57.788978  287073 system_pods.go:89] "csi-hostpath-resizer-0" [b6721174-babd-4e24-b908-4a280e886fb5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:29:57.788993  287073 system_pods.go:89] "csi-hostpathplugin-p2zpw" [d2e0853e-cbe5-458c-9a50-895995d9215a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:29:57.789003  287073 system_pods.go:89] "etcd-addons-419518" [b1a5fd36-21bd-4ec5-a60e-5be3a6177e57] Running
	I1009 18:29:57.789008  287073 system_pods.go:89] "kindnet-kvxfh" [bcd2746c-8fc1-4b56-8178-16bdea496886] Running
	I1009 18:29:57.789012  287073 system_pods.go:89] "kube-apiserver-addons-419518" [78a2f472-3706-4077-afa3-2eee3df93115] Running
	I1009 18:29:57.789029  287073 system_pods.go:89] "kube-controller-manager-addons-419518" [d73128d2-8a44-4602-83d4-32b24719a95a] Running
	I1009 18:29:57.789036  287073 system_pods.go:89] "kube-ingress-dns-minikube" [2d96bc67-3394-42b8-9254-238c62f0ecaa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:29:57.789042  287073 system_pods.go:89] "kube-proxy-lrwp7" [9bb24c1f-5538-4375-b085-78c2caf8bfaa] Running
	I1009 18:29:57.789048  287073 system_pods.go:89] "kube-scheduler-addons-419518" [2da3889e-9d2d-4491-996f-21161fdd596a] Running
	I1009 18:29:57.789057  287073 system_pods.go:89] "metrics-server-85b7d694d7-qbwpc" [40c81661-30ce-4bc1-897f-19a71b656527] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:29:57.789064  287073 system_pods.go:89] "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:29:57.789071  287073 system_pods.go:89] "registry-66898fdd98-vd6nz" [460d3c65-0693-42a0-96de-9014a001640c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:29:57.789079  287073 system_pods.go:89] "registry-creds-764b6fb674-d8wvd" [db9f2892-519c-4f26-9685-f2f98ea45002] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:29:57.789088  287073 system_pods.go:89] "registry-proxy-4qmrl" [a0705d58-d129-4ab3-8470-07c26af502ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:29:57.789105  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fjjl6" [f7b456b3-a4bc-4c01-a22c-d13a0f196166] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:57.789116  287073 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gdcjj" [0bf4575c-2d07-43d5-a282-4f5e0c9a7d88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:29:57.789120  287073 system_pods.go:89] "storage-provisioner" [9e11b73b-439c-4773-baaa-99fa9ad58286] Running
	I1009 18:29:57.789133  287073 system_pods.go:126] duration metric: took 1.584216594s to wait for k8s-apps to be running ...
	I1009 18:29:57.789140  287073 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:29:57.789209  287073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:29:57.803298  287073 system_svc.go:56] duration metric: took 14.148271ms WaitForService to wait for kubelet
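
The system_svc.go lines above wait for the kubelet service by shelling out to systemd: "systemctl is-active --quiet <unit>" prints nothing and reports the answer purely through its exit status, 0 meaning active. A minimal Go sketch of that kind of check, using plain os/exec rather than minikube's ssh_runner (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// unitIsActive reports whether the named systemd unit is currently active.
// --quiet suppresses output, so the exit status alone carries the answer.
func unitIsActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", unitIsActive("kubelet"))
}
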
	I1009 18:29:57.803335  287073 kubeadm.go:586] duration metric: took 43.108213029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:29:57.803363  287073 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:29:57.806749  287073 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 18:29:57.806779  287073 node_conditions.go:123] node cpu capacity is 2
	I1009 18:29:57.806794  287073 node_conditions.go:105] duration metric: took 3.42493ms to run NodePressure ...
	I1009 18:29:57.806805  287073 start.go:241] waiting for startup goroutines ...
	I1009 18:29:58.081547  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:58.223087  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:58.223254  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:58.247991  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:58.591603  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:58.724192  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:58.724673  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:58.750955  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:59.081024  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:59.225492  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:59.226095  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:59.249556  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:29:59.589061  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:29:59.722265  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:29:59.722401  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:29:59.746850  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:00.080981  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:00.247203  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:00.247331  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:00.257201  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:00.594648  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:00.725319  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:00.725546  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:00.748853  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:01.082683  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:01.225106  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:01.228160  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:01.253591  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:01.593698  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:01.726897  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:01.727371  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:01.756555  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:02.082311  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:02.227983  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:02.228731  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:02.249614  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:02.579727  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:02.721535  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:02.721697  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:02.749545  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:03.079732  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:03.222923  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:03.224296  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:03.247343  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:03.590345  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:03.726429  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:03.726651  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:03.747746  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:04.080539  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:04.222943  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:04.223329  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:04.246946  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:04.580776  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:04.721483  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:04.722349  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:04.746812  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:05.080947  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:05.222263  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:05.222541  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:05.246974  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:05.580016  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:05.721698  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:05.722020  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:05.747521  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:06.080731  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:06.222345  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:06.222739  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:06.247228  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:06.580688  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:06.722181  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:06.722528  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:06.747108  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:07.081263  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:07.225050  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:07.225765  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:07.246747  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:07.580494  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:07.721148  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:07.721252  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:07.746930  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:08.080290  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:08.223016  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:08.223175  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:08.248121  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:08.580376  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:08.721569  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:08.721919  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:08.747839  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:09.080950  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:09.221845  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:09.222078  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:09.247653  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:09.582905  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:09.722449  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:09.722884  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:09.747687  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:10.081088  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:10.221221  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:30:10.221440  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:10.247248  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:10.580622  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:10.686857  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:30:10.724150  287073 kapi.go:107] duration metric: took 50.007124624s to wait for kubernetes.io/minikube-addons=registry ...
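
The kapi.go:96 lines record a poll loop: each addon wait repeatedly lists pods matching a label selector (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, and so on) and stops once the matches are Running, at which point kapi.go:107 logs the elapsed duration, as it just did for the registry label. A minimal sketch of such a label-based wait using standard client-go APIs; the helper name, namespace, intervals, and kubeconfig handling are illustrative and not minikube's own implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabelledPods polls until at least one pod matches selector in ns
// and every match reports phase Running, or the timeout expires.
func waitForLabelledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet: still "Pending" from the caller's view
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	if err := waitForLabelledPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to wait for the registry label\n", time.Since(start))
}
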
	I1009 18:30:10.724570  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:10.748456  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:11.081428  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:11.221966  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:11.247111  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:11.580666  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:11.722281  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:11.749588  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:11.752509  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.065605336s)
	W1009 18:30:11.752544  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:30:11.752562  287073 retry.go:31] will retry after 24.244993909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
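
The apply fails because kubectl validation requires every manifest document to set apiVersion and kind, and ig-crd.yaml evidently contains a document that sets neither; addons.go:461 therefore treats the step as retryable and retry.go:31 schedules another attempt after a delay. A minimal sketch of a retry-with-backoff loop of this general shape; the helper name, attempt count, and intervals are illustrative, not minikube's retry package:

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// sleeping a growing interval between failures, much like the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off before the next attempt
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("validation error on attempt %d", calls)
		}
		return nil // e.g. the apply finally succeeded
	})
	fmt.Println("result:", err)
}
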
	I1009 18:30:12.080809  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:12.221734  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:12.247346  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:12.581186  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:12.722184  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:12.748176  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:13.082202  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:13.221596  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:13.247210  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:13.588453  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:13.722017  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:13.747064  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:14.080756  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:14.221687  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:14.247521  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:14.579803  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:14.721249  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:14.746336  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:15.081332  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:15.222308  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:15.247375  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:15.586909  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:15.721142  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:15.749054  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:16.081371  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:16.221478  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:16.247239  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:16.579975  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:16.720898  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:16.746900  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:17.083216  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:17.222226  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:17.247959  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:17.581679  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:17.720845  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:17.747055  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:18.081679  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:18.222618  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:18.248080  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:18.612134  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:18.722006  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:18.748344  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:19.082377  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:19.223078  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:19.248823  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:19.579846  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:19.721415  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:19.746635  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:20.082108  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:20.221306  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:20.246993  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:20.580964  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:20.725782  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:20.747124  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:21.082172  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:21.221879  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:21.247398  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:21.580398  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:21.721635  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:21.747185  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:22.081841  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:22.221003  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:22.247714  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:22.579745  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:22.721127  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:22.747778  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:23.082115  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:23.221512  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:23.247230  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:23.580432  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:23.721923  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:23.747558  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:24.080924  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:24.221389  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:24.247389  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:24.579772  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:24.721574  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:24.746988  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:25.081349  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:25.221987  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:25.248928  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:25.579831  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:25.729833  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:25.755177  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:26.080741  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:26.221804  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:26.247507  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:26.580749  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:26.722254  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:26.747707  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:27.081363  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:27.221801  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:27.248055  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:27.580535  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:27.722163  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:27.758648  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:28.080642  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:28.222228  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:28.246218  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:28.580037  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:28.721390  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:28.758077  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:29.081696  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:29.223628  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:29.249500  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:29.581400  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:29.723214  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:29.748963  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:30.092779  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:30.222586  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:30.248754  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:30.579768  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:30.722078  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:30.747100  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:31.080888  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:31.221750  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:31.247740  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:31.581184  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:31.721298  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:31.747428  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:32.079904  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:32.221193  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:32.247385  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:32.581019  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:32.721229  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:32.748042  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:33.080980  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:33.221576  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:33.247345  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:33.586389  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:33.722761  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:33.747712  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:34.079955  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:34.221662  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:34.248512  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:34.580530  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:34.721873  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:34.748104  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:35.082020  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:35.221209  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:35.247138  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:35.580129  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:35.721846  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:35.746970  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:35.997917  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:30:36.080874  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:36.221001  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:36.247626  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:36.581032  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:36.721413  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:36.747377  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:37.080330  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:37.164877  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.166856527s)
	W1009 18:30:37.164968  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:30:37.165002  287073 retry.go:31] will retry after 32.347620857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:30:37.221207  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:37.248115  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:37.580410  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:37.722187  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:37.748627  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:38.080697  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:38.222904  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:38.248304  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:38.588344  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:38.722596  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:38.747576  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:39.080742  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:39.221467  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:39.247506  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:39.580223  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:39.721579  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:39.747903  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:40.082149  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:40.222361  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:40.249990  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:40.590991  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:40.722493  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:40.749556  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:41.080579  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:41.222055  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:41.248105  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:41.580578  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:41.722278  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:41.747383  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:42.081533  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:42.224140  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:42.324545  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:42.580644  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:42.721853  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:42.747554  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:43.080793  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:43.221206  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:43.247798  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:43.580183  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:43.721617  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:43.746841  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:44.080768  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:44.221108  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:44.248978  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:44.580109  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:44.723875  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:44.753774  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:45.085590  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:45.221979  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:45.248585  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:45.582683  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:45.720898  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:45.747040  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:46.081136  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:46.220939  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:46.248122  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:46.580506  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:46.722472  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:46.747212  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:47.080362  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:47.222078  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:47.247470  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:47.583395  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:47.723405  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:47.746313  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:48.080495  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:48.222052  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:48.247468  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:48.579499  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:48.721649  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:48.756884  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:49.081130  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:49.222445  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:49.246305  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:49.580425  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:49.721214  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:49.747616  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:50.080150  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:50.221202  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:50.247355  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:50.579517  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:50.722120  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:50.747125  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:51.080867  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:51.221234  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:51.247178  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:51.588627  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:51.722641  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:51.747191  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:52.080784  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:52.228052  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:52.250642  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:52.580883  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:52.722092  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:52.747405  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:53.082800  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:53.228696  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:53.246635  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:53.579735  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:53.722082  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:53.747481  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:54.080937  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:54.221967  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:54.247352  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:54.589166  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:54.721954  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:54.747734  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:55.094282  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:55.223338  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:55.324309  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:55.587606  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:55.722306  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:55.749251  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:56.080481  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:56.222931  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:56.247862  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:56.583830  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:56.721098  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:56.747767  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:57.080336  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:57.222041  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:57.247770  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:57.579408  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:57.721788  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:57.747034  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:58.080345  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:58.222318  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:58.246612  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:58.580290  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:58.721595  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:58.746601  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:30:59.081446  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:59.221919  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:30:59.247674  287073 kapi.go:107] duration metric: took 1m38.004370654s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:30:59.586059  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:30:59.721523  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:00.095427  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:00.227344  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:00.586437  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:00.722177  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:01.081164  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:01.221885  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:01.580837  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:01.721175  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:02.080208  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:02.221238  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:02.579564  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:02.722637  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:03.079965  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:03.221621  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:03.580165  287073 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:31:03.721185  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:04.080738  287073 kapi.go:107] duration metric: took 1m39.504163254s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:31:04.084577  287073 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-419518 cluster.
	I1009 18:31:04.088459  287073 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:31:04.092294  287073 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:31:04.223823  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:04.721708  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:05.222426  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:05.721628  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:06.221985  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:06.721887  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:07.224289  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:07.722163  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:08.222889  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:08.721612  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:09.224157  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:09.513353  287073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:31:09.721285  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:10.221824  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:10.697801  287073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.184355167s)
	W1009 18:31:10.697901  287073 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:31:10.698030  287073 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:31:10.722318  287073 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:31:11.222422  287073 kapi.go:107] duration metric: took 1m50.504528082s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:31:11.225365  287073 out.go:179] * Enabled addons: amd-gpu-device-plugin, default-storageclass, ingress-dns, registry-creds, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1009 18:31:11.228241  287073 addons.go:514] duration metric: took 1m56.53171651s for enable addons: enabled=[amd-gpu-device-plugin default-storageclass ingress-dns registry-creds storage-provisioner nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1009 18:31:11.228298  287073 start.go:246] waiting for cluster config update ...
	I1009 18:31:11.228325  287073 start.go:255] writing updated cluster config ...
	I1009 18:31:11.228659  287073 ssh_runner.go:195] Run: rm -f paused
	I1009 18:31:11.232609  287073 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:31:11.236104  287073 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ts42b" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.243006  287073 pod_ready.go:94] pod "coredns-66bc5c9577-ts42b" is "Ready"
	I1009 18:31:11.243036  287073 pod_ready.go:86] duration metric: took 6.9008ms for pod "coredns-66bc5c9577-ts42b" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.245619  287073 pod_ready.go:83] waiting for pod "etcd-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.250868  287073 pod_ready.go:94] pod "etcd-addons-419518" is "Ready"
	I1009 18:31:11.250946  287073 pod_ready.go:86] duration metric: took 5.299004ms for pod "etcd-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.253249  287073 pod_ready.go:83] waiting for pod "kube-apiserver-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.258056  287073 pod_ready.go:94] pod "kube-apiserver-addons-419518" is "Ready"
	I1009 18:31:11.258170  287073 pod_ready.go:86] duration metric: took 4.850895ms for pod "kube-apiserver-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.323035  287073 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.636646  287073 pod_ready.go:94] pod "kube-controller-manager-addons-419518" is "Ready"
	I1009 18:31:11.636681  287073 pod_ready.go:86] duration metric: took 313.617028ms for pod "kube-controller-manager-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:11.837557  287073 pod_ready.go:83] waiting for pod "kube-proxy-lrwp7" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:12.237807  287073 pod_ready.go:94] pod "kube-proxy-lrwp7" is "Ready"
	I1009 18:31:12.237839  287073 pod_ready.go:86] duration metric: took 400.253129ms for pod "kube-proxy-lrwp7" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:12.437038  287073 pod_ready.go:83] waiting for pod "kube-scheduler-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:12.836452  287073 pod_ready.go:94] pod "kube-scheduler-addons-419518" is "Ready"
	I1009 18:31:12.836480  287073 pod_ready.go:86] duration metric: took 399.366818ms for pod "kube-scheduler-addons-419518" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:31:12.836493  287073 pod_ready.go:40] duration metric: took 1.603846792s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:31:12.892140  287073 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 18:31:12.895196  287073 out.go:179] * Done! kubectl is now configured to use "addons-419518" cluster and "default" namespace by default
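The gcp-auth lines earlier in this log describe an opt-out: once the addon is enabled, GCP credentials are mounted into every new pod unless the pod carries the gcp-auth-skip-secret label. A minimal sketch of creating such a pod with kubectl; the pod name is a placeholder, the image is the busybox image pulled later in this run, and the label value "true" is an assumption since the log only names the key:

  # hypothetical pod that opts out of credential mounting
  kubectl run no-gcp-creds \
    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
    --labels=gcp-auth-skip-secret=true \
    --restart=Never \
    -- sleep 3600

As the third gcp-auth message notes, pods that already exist are not re-mutated; they have to be recreated, or the addon re-enabled with --refresh.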
	
	
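The inspektor-gadget failure above is kubectl's client-side validation: every document in an applied manifest must set apiVersion and kind, and at least one document in /etc/kubernetes/addons/ig-crd.yaml apparently does not. Two quick follow-ups, sketched under the assumption of debugging on the node itself; the second is simply the workaround the error message proposes:

  # see which YAML documents in the CRD file carry apiVersion/kind and which do not
  grep -n -E '^(---|apiVersion:|kind:)' /etc/kubernetes/addons/ig-crd.yaml

  # or rerun the exact apply from the log with client-side validation off
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
    -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml

Note that the objects listed on stdout (namespace, serviceaccount, daemonset, and so on) were applied successfully; only the CRD file failed validation.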
	==> CRI-O <==
	Oct 09 18:31:10 addons-419518 crio[829]: time="2025-10-09T18:31:10.263358601Z" level=info msg="Created container b221e7724dee96c97e575e828c6f1103bb03aa577daba218d5f6fa10389caa01: ingress-nginx/ingress-nginx-controller-9cc49f96f-vm584/controller" id=0cb7a7d3-c12a-48bb-b83a-5ec8c580c629 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:31:10 addons-419518 crio[829]: time="2025-10-09T18:31:10.264176221Z" level=info msg="Starting container: b221e7724dee96c97e575e828c6f1103bb03aa577daba218d5f6fa10389caa01" id=098abb63-e44d-4c65-8349-95c700feb1ea name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 18:31:10 addons-419518 crio[829]: time="2025-10-09T18:31:10.268810303Z" level=info msg="Started container" PID=4885 containerID=b221e7724dee96c97e575e828c6f1103bb03aa577daba218d5f6fa10389caa01 description=ingress-nginx/ingress-nginx-controller-9cc49f96f-vm584/controller id=098abb63-e44d-4c65-8349-95c700feb1ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=6625833a2a108574981a87fe00307ab53bb7a5d6f3d6772c4862f2581f98c4a8
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.23838267Z" level=info msg="Running pod sandbox: default/busybox/POD" id=729a7a33-0987-4cb8-a99a-05a4e1df94ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.238469243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.251449539Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8fb427be72f0946a98d5e8d9abd36d03a952cc194aa71b6f4c7e34f7ca40eeb7 UID:f5208359-5ee5-4bec-9305-f89953e59ed6 NetNS:/var/run/netns/139e5425-3cdd-4b41-b4f3-e99084747512 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400120cc88}] Aliases:map[]}"
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.251504678Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.263727046Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8fb427be72f0946a98d5e8d9abd36d03a952cc194aa71b6f4c7e34f7ca40eeb7 UID:f5208359-5ee5-4bec-9305-f89953e59ed6 NetNS:/var/run/netns/139e5425-3cdd-4b41-b4f3-e99084747512 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400120cc88}] Aliases:map[]}"
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.263888926Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.271934848Z" level=info msg="Ran pod sandbox 8fb427be72f0946a98d5e8d9abd36d03a952cc194aa71b6f4c7e34f7ca40eeb7 with infra container: default/busybox/POD" id=729a7a33-0987-4cb8-a99a-05a4e1df94ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.27309775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=04f8f49f-ad57-47d2-beef-615baac43cc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.273259786Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=04f8f49f-ad57-47d2-beef-615baac43cc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.273306883Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=04f8f49f-ad57-47d2-beef-615baac43cc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.274401026Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5038964f-bd60-4f9b-83e0-7df759ac7e53 name=/runtime.v1.ImageService/PullImage
	Oct 09 18:31:14 addons-419518 crio[829]: time="2025-10-09T18:31:14.277214833Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.124496822Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5038964f-bd60-4f9b-83e0-7df759ac7e53 name=/runtime.v1.ImageService/PullImage
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.125258574Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c8e7e10e-45c9-462a-bf86-0949467cc4b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.127232425Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ae2ffecb-e24f-4cd4-b54a-3c9b7f6a7563 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.133622216Z" level=info msg="Creating container: default/busybox/busybox" id=487b9f2c-68dc-4226-ab5a-43caa288a9b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.134488149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.141496445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.142239432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.159234837Z" level=info msg="Created container 1c7d7c222a928a547654df2b608fed649ffd80f6eb55f97d06fffbafdf4c04e9: default/busybox/busybox" id=487b9f2c-68dc-4226-ab5a-43caa288a9b9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.160525207Z" level=info msg="Starting container: 1c7d7c222a928a547654df2b608fed649ffd80f6eb55f97d06fffbafdf4c04e9" id=15a4fd89-7e0c-43b3-88eb-839425c8127b name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 18:31:16 addons-419518 crio[829]: time="2025-10-09T18:31:16.16438189Z" level=info msg="Started container" PID=5057 containerID=1c7d7c222a928a547654df2b608fed649ffd80f6eb55f97d06fffbafdf4c04e9 description=default/busybox/busybox id=15a4fd89-7e0c-43b3-88eb-839425c8127b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8fb427be72f0946a98d5e8d9abd36d03a952cc194aa71b6f4c7e34f7ca40eeb7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	1c7d7c222a928       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   8fb427be72f09       busybox                                    default
	b221e7724dee9       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             14 seconds ago       Running             controller                               0                   6625833a2a108       ingress-nginx-controller-9cc49f96f-vm584   ingress-nginx
	acfe289f616bb       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                                             18 seconds ago       Exited              patch                                    3                   754f8a2ebbf5d       ingress-nginx-admission-patch-rv5vh        ingress-nginx
	3dfa8f6e3d04c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 21 seconds ago       Running             gcp-auth                                 0                   9811954f45084       gcp-auth-78565c9fb4-8tvnl                  gcp-auth
	e0fa427a81a17       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          26 seconds ago       Running             csi-snapshotter                          0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	5fe03c686f8b2       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          27 seconds ago       Running             csi-provisioner                          0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	d925680a7d245       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            29 seconds ago       Running             liveness-probe                           0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	ca2e6448f0dce       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           30 seconds ago       Running             hostpath                                 0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	caab66aa10413       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                31 seconds ago       Running             node-driver-registrar                    0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	79ba2ae1cb348       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            32 seconds ago       Running             gadget                                   0                   cb36360f31efb       gadget-xxzz7                               gadget
	f3b3020865ddf       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             37 seconds ago       Running             local-path-provisioner                   0                   62feeecd9740e       local-path-provisioner-648f6765c9-jjjkb    local-path-storage
	46ac337e6073b       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      37 seconds ago       Running             volume-snapshot-controller               0                   ca59c8f9749c8       snapshot-controller-7d9fbc56b8-gdcjj       kube-system
	98cbf51d8ed2c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   38 seconds ago       Exited              patch                                    0                   eb21c8cc51c20       gcp-auth-certs-patch-cfpxg                 gcp-auth
	8c0f1f4eee998       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      38 seconds ago       Running             volume-snapshot-controller               0                   8364d021775bc       snapshot-controller-7d9fbc56b8-fjjl6       kube-system
	b7dc868dfc33f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   40 seconds ago       Running             csi-external-health-monitor-controller   0                   bf076869b96ed       csi-hostpathplugin-p2zpw                   kube-system
	0a59f751353f9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   41 seconds ago       Exited              create                                   0                   c19aa1c710a4b       ingress-nginx-admission-create-bmpfw       ingress-nginx
	ac5feaaf098c2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   42 seconds ago       Exited              create                                   0                   b0b6034d3f695       gcp-auth-certs-create-5227c                gcp-auth
	0d83359f6789f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             42 seconds ago       Running             csi-attacher                             0                   73122c095849b       csi-hostpath-attacher-0                    kube-system
	c8fcd3e8370a3       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               44 seconds ago       Running             minikube-ingress-dns                     0                   3e0055f94ae6f       kube-ingress-dns-minikube                  kube-system
	903817dace553       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     52 seconds ago       Running             nvidia-device-plugin-ctr                 0                   95b7fc628115c       nvidia-device-plugin-daemonset-qtz2j       kube-system
	52c50a08ae9a2       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   b725cddf4a0fc       cloud-spanner-emulator-86bd5cbb97-2zfvh    default
	4ffab12f1d2fe       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   09b4de9e89fae       registry-proxy-4qmrl                       kube-system
	ff3eb8e48ecc2       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   cd04638ce2edb       yakd-dashboard-5ff678cb9-hjpsv             yakd-dashboard
	88ad8c9fc37d9       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   a8999c36d49ae       csi-hostpath-resizer-0                     kube-system
	013cbdf8660e8       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           About a minute ago   Running             registry                                 0                   1c91f319ad4b3       registry-66898fdd98-vd6nz                  kube-system
	612bd221adece       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   e48ffaa6cfb7c       metrics-server-85b7d694d7-qbwpc            kube-system
	57430c58fdb35       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   96cd852ddebea       coredns-66bc5c9577-ts42b                   kube-system
	8511bdac64fcf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   5fc9ec431ae78       storage-provisioner                        kube-system
	0fdd586f76a51       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   af7d00d05d8fc       kindnet-kvxfh                              kube-system
	d6540d20da4ee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   8d34a1ddf8642       kube-proxy-lrwp7                           kube-system
	7fe0435ff5aac       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   e03c2eab84bcd       kube-scheduler-addons-419518               kube-system
	a04d990d6cdb2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   cd2f2771fd24c       etcd-addons-419518                         kube-system
	3fa30ba6794d2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   ffb3f563e0172       kube-apiserver-addons-419518               kube-system
	fea680bc13a62       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   2067cb7d0ec25       kube-controller-manager-addons-419518      kube-system
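The table above is the container runtime's own view of the node. Roughly the same information can be pulled interactively with crictl from inside the minikube node; a sketch, assuming the usual minikube ssh access for this profile, with the container ID prefix copied from the ingress-nginx controller row:

  # list all containers, as in the table above
  minikube -p addons-419518 ssh -- sudo crictl ps -a

  # drill into a single container and read its logs
  minikube -p addons-419518 ssh -- sudo crictl inspect b221e7724dee9
  minikube -p addons-419518 ssh -- sudo crictl logs b221e7724dee9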
	
	
	==> coredns [57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9] <==
	[INFO] 10.244.0.4:34379 - 1917 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000075381s
	[INFO] 10.244.0.4:34379 - 7857 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002462942s
	[INFO] 10.244.0.4:34379 - 18840 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002378536s
	[INFO] 10.244.0.4:34379 - 61047 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000295296s
	[INFO] 10.244.0.4:34379 - 8853 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.0001163s
	[INFO] 10.244.0.4:48062 - 54173 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000193601s
	[INFO] 10.244.0.4:48062 - 54403 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000277547s
	[INFO] 10.244.0.4:36761 - 29102 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110573s
	[INFO] 10.244.0.4:36761 - 28881 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088714s
	[INFO] 10.244.0.4:41135 - 42835 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097666s
	[INFO] 10.244.0.4:41135 - 42614 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067258s
	[INFO] 10.244.0.4:45410 - 62091 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001721638s
	[INFO] 10.244.0.4:45410 - 61894 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001744227s
	[INFO] 10.244.0.4:46810 - 30759 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00018332s
	[INFO] 10.244.0.4:46810 - 30578 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096517s
	[INFO] 10.244.0.20:39207 - 22968 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000310557s
	[INFO] 10.244.0.20:48270 - 64397 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000198081s
	[INFO] 10.244.0.20:52084 - 21293 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149547s
	[INFO] 10.244.0.20:37321 - 41772 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000299743s
	[INFO] 10.244.0.20:49323 - 34038 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015534s
	[INFO] 10.244.0.20:54315 - 64541 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000162462s
	[INFO] 10.244.0.20:40486 - 26803 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002195703s
	[INFO] 10.244.0.20:49244 - 40289 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001827611s
	[INFO] 10.244.0.20:49308 - 27729 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003172331s
	[INFO] 10.244.0.20:58620 - 50870 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003394904s
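The run of NXDOMAIN answers above is ordinary search-path expansion rather than a resolution problem: with the cluster search domains in the pod's resolv.conf (and, by default, ndots:5), a lookup of registry.kube-system.svc.cluster.local is first retried with each suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the fully qualified name returns NOERROR, and the storage.googleapis.com lookups from 10.244.0.20 follow the same pattern. A quick way to observe this from inside the cluster, sketched using the busybox pod created at the end of the start log (its busybox build should include an nslookup applet):

  kubectl exec busybox -- cat /etc/resolv.conf
  kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local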
	
	
	==> describe nodes <==
	Name:               addons-419518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-419518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=addons-419518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T18_29_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-419518
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-419518"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 18:29:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-419518
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 18:31:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 18:31:12 +0000   Thu, 09 Oct 2025 18:29:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 18:31:12 +0000   Thu, 09 Oct 2025 18:29:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 18:31:12 +0000   Thu, 09 Oct 2025 18:29:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 18:31:12 +0000   Thu, 09 Oct 2025 18:29:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-419518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0aaba348a6d44fe9392eb659a3419c0
	  System UUID:                84f031e7-c237-48a4-afe2-ac0fc5df6eb2
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-2zfvh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  gadget                      gadget-xxzz7                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  gcp-auth                    gcp-auth-78565c9fb4-8tvnl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-vm584    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m4s
	  kube-system                 coredns-66bc5c9577-ts42b                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m10s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 csi-hostpathplugin-p2zpw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 etcd-addons-419518                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m15s
	  kube-system                 kindnet-kvxfh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-addons-419518                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-addons-419518       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-lrwp7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-addons-419518                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 metrics-server-85b7d694d7-qbwpc             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m5s
	  kube-system                 nvidia-device-plugin-daemonset-qtz2j        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-66898fdd98-vd6nz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 registry-creds-764b6fb674-d8wvd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 registry-proxy-4qmrl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 snapshot-controller-7d9fbc56b8-fjjl6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 snapshot-controller-7d9fbc56b8-gdcjj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  local-path-storage          local-path-provisioner-648f6765c9-jjjkb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-hjpsv              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m8s                   kube-proxy       
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node addons-419518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m23s (x8 over 2m23s)  kubelet          Node addons-419518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m23s (x8 over 2m23s)  kubelet          Node addons-419518 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m15s                  kubelet          Node addons-419518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m15s                  kubelet          Node addons-419518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m15s                  kubelet          Node addons-419518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m11s                  node-controller  Node addons-419518 event: Registered Node addons-419518 in Controller
	  Normal   NodeReady                89s                    kubelet          Node addons-419518 status is now: NodeReady
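The Allocated resources totals in the node description are simply the column sums of the per-pod requests and limits listed above; as a quick arithmetic check against the node's 2-CPU allocatable (the reproduce command assumes kubectl is pointed at this cluster):

  # CPU requests from the pod table:
  #   100m ingress-nginx-controller + 100m coredns + 100m etcd + 100m kindnet
  # + 250m kube-apiserver + 200m kube-controller-manager
  # + 100m kube-scheduler + 100m metrics-server                = 1050m
  # allocatable cpu: 2 cores = 2000m  ->  1050m / 2000m ~= 52%
  # memory requests: 90 + 70 + 100 + 50 + 200 + 128 Mi         = 638Mi
  kubectl describe node addons-419518 | grep -A 10 'Allocated resources'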
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014502] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.555614] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757222] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.781088] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 14209023 ns
	[Oct 9 18:26] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 18:29] overlayfs: idmapped layers are currently not supported
	[  +0.074293] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d] <==
	{"level":"warn","ts":"2025-10-09T18:29:04.960829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:04.995746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.020008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.053812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.077820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.109618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.139672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.162768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.189982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.218357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.243739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.304637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.319368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.341944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.367339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.390986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.406320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.423061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:05.518248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:21.569884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:21.591768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:43.417450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:43.431728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:43.479759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:29:43.494561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38804","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [3dfa8f6e3d04c4610e97aec90197e6d5807c4a90d87755a7c02194eda4f6660a] <==
	2025/10/09 18:31:03 GCP Auth Webhook started!
	2025/10/09 18:31:13 Ready to marshal response ...
	2025/10/09 18:31:13 Ready to write response ...
	2025/10/09 18:31:13 Ready to marshal response ...
	2025/10/09 18:31:13 Ready to write response ...
	2025/10/09 18:31:14 Ready to marshal response ...
	2025/10/09 18:31:14 Ready to write response ...
	
	
	==> kernel <==
	 18:31:24 up  1:13,  0 user,  load average: 2.60, 2.61, 3.17
	Linux addons-419518 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409] <==
	E1009 18:29:45.157786       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 18:29:45.157830       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 18:29:45.157934       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 18:29:45.158107       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 18:29:46.656521       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 18:29:46.656594       1 metrics.go:72] Registering metrics
	I1009 18:29:46.656655       1 controller.go:711] "Syncing nftables rules"
	I1009 18:29:55.155370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:29:55.155429       1 main.go:301] handling current node
	I1009 18:30:05.154446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:30:05.154485       1 main.go:301] handling current node
	I1009 18:30:15.155660       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:30:15.155699       1 main.go:301] handling current node
	I1009 18:30:25.155022       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:30:25.155059       1 main.go:301] handling current node
	I1009 18:30:35.154564       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:30:35.154600       1 main.go:301] handling current node
	I1009 18:30:45.155334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:30:45.155369       1 main.go:301] handling current node
	I1009 18:30:55.154628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:30:55.154768       1 main.go:301] handling current node
	I1009 18:31:05.154447       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:31:05.154493       1 main.go:301] handling current node
	I1009 18:31:15.154642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:31:15.154676       1 main.go:301] handling current node
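The i/o timeouts at the top of the kindnet log are failed list/watch calls to 10.96.0.1:443, which by default is the in-cluster VIP of the kubernetes Service that the DaemonSet uses to reach the API server; once those watches succeed the controller reports "Caches are synced" and then simply re-handles the single node every ten seconds. A one-liner to confirm what that address maps to (sketch; assumes kubectl is pointed at this cluster):

  kubectl get svc kubernetes -n default -o wide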
	
	
	==> kube-apiserver [3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade] <==
	I1009 18:29:20.996432       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1009 18:29:21.144670       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.79.26"}
	W1009 18:29:21.569300       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1009 18:29:21.588718       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1009 18:29:24.399980       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.235.159"}
	W1009 18:29:43.417411       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 18:29:43.431756       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 18:29:43.478936       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 18:29:43.493698       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 18:29:55.747005       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.235.159:443: connect: connection refused
	E1009 18:29:55.747134       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.235.159:443: connect: connection refused" logger="UnhandledError"
	W1009 18:29:55.747538       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.235.159:443: connect: connection refused
	E1009 18:29:55.747606       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.235.159:443: connect: connection refused" logger="UnhandledError"
	W1009 18:29:55.824524       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.235.159:443: connect: connection refused
	E1009 18:29:55.825371       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.235.159:443: connect: connection refused" logger="UnhandledError"
	W1009 18:30:11.360177       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 18:30:11.360252       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1009 18:30:11.360590       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.152.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.152.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.152.104:443: connect: connection refused" logger="UnhandledError"
	E1009 18:30:11.364174       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.152.104:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.152.104:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.152.104:443: connect: connection refused" logger="UnhandledError"
	I1009 18:30:11.484383       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1009 18:31:22.221251       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57710: use of closed network connection
	E1009 18:31:22.443157       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57750: use of closed network connection
	
	
	==> kube-controller-manager [fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223] <==
	I1009 18:29:13.421787       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 18:29:13.421856       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 18:29:13.421880       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 18:29:13.421940       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 18:29:13.421961       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 18:29:13.431234       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 18:29:13.432293       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-419518" podCIDRs=["10.244.0.0/24"]
	I1009 18:29:13.444554       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 18:29:13.444659       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 18:29:13.444564       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 18:29:13.444754       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-419518"
	I1009 18:29:13.444801       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 18:29:13.447136       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 18:29:13.447144       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 18:29:13.448286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 18:29:13.451128       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	E1009 18:29:19.625982       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1009 18:29:43.410429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 18:29:43.410592       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1009 18:29:43.410644       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1009 18:29:43.455818       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1009 18:29:43.471869       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1009 18:29:43.510807       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 18:29:43.572629       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 18:29:58.454312       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173] <==
	I1009 18:29:15.082486       1 server_linux.go:53] "Using iptables proxy"
	I1009 18:29:15.394040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 18:29:15.497067       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 18:29:15.497101       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 18:29:15.497187       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:29:15.587124       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:29:15.590443       1 server_linux.go:132] "Using iptables Proxier"
	I1009 18:29:15.596070       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:29:15.596427       1 server.go:527] "Version info" version="v1.34.1"
	I1009 18:29:15.596451       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:29:15.597755       1 config.go:200] "Starting service config controller"
	I1009 18:29:15.597778       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 18:29:15.597795       1 config.go:106] "Starting endpoint slice config controller"
	I1009 18:29:15.597799       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 18:29:15.597809       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 18:29:15.597813       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 18:29:15.603858       1 config.go:309] "Starting node config controller"
	I1009 18:29:15.603876       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 18:29:15.603884       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 18:29:15.698089       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 18:29:15.698145       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 18:29:15.698187       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa] <==
	E1009 18:29:06.569750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 18:29:06.569806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 18:29:06.569857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 18:29:06.569902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 18:29:06.569992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 18:29:06.570042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 18:29:06.570084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 18:29:06.571706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 18:29:06.571778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 18:29:06.571850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 18:29:06.571913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 18:29:06.571962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 18:29:06.571999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 18:29:06.572042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 18:29:06.572092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 18:29:06.572128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 18:29:06.573024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 18:29:06.573120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 18:29:07.392210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 18:29:07.421855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 18:29:07.535347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 18:29:07.555047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 18:29:07.617419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 18:29:07.801055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1009 18:29:09.536477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 18:30:53 addons-419518 kubelet[1283]: E1009 18:30:53.946277    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"patch\" with CrashLoopBackOff: \"back-off 20s restarting failed container=patch pod=ingress-nginx-admission-patch-rv5vh_ingress-nginx(699cbdff-3d28-48eb-80f9-8fc886bf7f09)\"" pod="ingress-nginx/ingress-nginx-admission-patch-rv5vh" podUID="699cbdff-3d28-48eb-80f9-8fc886bf7f09"
	Oct 09 18:30:53 addons-419518 kubelet[1283]: I1009 18:30:53.967161    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-xxzz7" podStartSLOduration=72.723825665 podStartE2EDuration="1m34.967143343s" podCreationTimestamp="2025-10-09 18:29:19 +0000 UTC" firstStartedPulling="2025-10-09 18:30:29.191026116 +0000 UTC m=+80.429354194" lastFinishedPulling="2025-10-09 18:30:51.434343794 +0000 UTC m=+102.672671872" observedRunningTime="2025-10-09 18:30:51.706014247 +0000 UTC m=+102.944342325" watchObservedRunningTime="2025-10-09 18:30:53.967143343 +0000 UTC m=+105.205471429"
	Oct 09 18:30:55 addons-419518 kubelet[1283]: I1009 18:30:55.131006    1283 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 09 18:30:55 addons-419518 kubelet[1283]: I1009 18:30:55.131106    1283 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 09 18:30:58 addons-419518 kubelet[1283]: I1009 18:30:58.800829    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-p2zpw" podStartSLOduration=2.486539171 podStartE2EDuration="1m3.800798941s" podCreationTimestamp="2025-10-09 18:29:55 +0000 UTC" firstStartedPulling="2025-10-09 18:29:56.802042729 +0000 UTC m=+48.040370807" lastFinishedPulling="2025-10-09 18:30:58.116302491 +0000 UTC m=+109.354630577" observedRunningTime="2025-10-09 18:30:58.799231632 +0000 UTC m=+110.037559718" watchObservedRunningTime="2025-10-09 18:30:58.800798941 +0000 UTC m=+110.039127027"
	Oct 09 18:30:59 addons-419518 kubelet[1283]: E1009 18:30:59.813876    1283 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 09 18:30:59 addons-419518 kubelet[1283]: E1009 18:30:59.813964    1283 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db9f2892-519c-4f26-9685-f2f98ea45002-gcr-creds podName:db9f2892-519c-4f26-9685-f2f98ea45002 nodeName:}" failed. No retries permitted until 2025-10-09 18:32:03.813944987 +0000 UTC m=+175.052273065 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/db9f2892-519c-4f26-9685-f2f98ea45002-gcr-creds") pod "registry-creds-764b6fb674-d8wvd" (UID: "db9f2892-519c-4f26-9685-f2f98ea45002") : secret "registry-creds-gcr" not found
	Oct 09 18:31:00 addons-419518 kubelet[1283]: W1009 18:31:00.175195    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/crio-9811954f45084f15a352ab3dadccfffdc905034b573da2c9015056cfa58e2f77 WatchSource:0}: Error finding container 9811954f45084f15a352ab3dadccfffdc905034b573da2c9015056cfa58e2f77: Status 404 returned error can't find the container with id 9811954f45084f15a352ab3dadccfffdc905034b573da2c9015056cfa58e2f77
	Oct 09 18:31:00 addons-419518 kubelet[1283]: W1009 18:31:00.250428    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/crio-6625833a2a108574981a87fe00307ab53bb7a5d6f3d6772c4862f2581f98c4a8 WatchSource:0}: Error finding container 6625833a2a108574981a87fe00307ab53bb7a5d6f3d6772c4862f2581f98c4a8: Status 404 returned error can't find the container with id 6625833a2a108574981a87fe00307ab53bb7a5d6f3d6772c4862f2581f98c4a8
	Oct 09 18:31:05 addons-419518 kubelet[1283]: I1009 18:31:05.946606    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-vd6nz" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:31:05 addons-419518 kubelet[1283]: I1009 18:31:05.946712    1283 scope.go:117] "RemoveContainer" containerID="4acc4375a3181e48f5cb0806a826ca752b11613b762854affa75752a855137c0"
	Oct 09 18:31:06 addons-419518 kubelet[1283]: I1009 18:31:06.857172    1283 scope.go:117] "RemoveContainer" containerID="4acc4375a3181e48f5cb0806a826ca752b11613b762854affa75752a855137c0"
	Oct 09 18:31:06 addons-419518 kubelet[1283]: I1009 18:31:06.878828    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-8tvnl" podStartSLOduration=99.821426462 podStartE2EDuration="1m42.87880714s" podCreationTimestamp="2025-10-09 18:29:24 +0000 UTC" firstStartedPulling="2025-10-09 18:31:00.182616707 +0000 UTC m=+111.420944785" lastFinishedPulling="2025-10-09 18:31:03.239997385 +0000 UTC m=+114.478325463" observedRunningTime="2025-10-09 18:31:03.844413241 +0000 UTC m=+115.082741335" watchObservedRunningTime="2025-10-09 18:31:06.87880714 +0000 UTC m=+118.117135226"
	Oct 09 18:31:08 addons-419518 kubelet[1283]: I1009 18:31:08.404398    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwgvm\" (UniqueName: \"kubernetes.io/projected/699cbdff-3d28-48eb-80f9-8fc886bf7f09-kube-api-access-jwgvm\") pod \"699cbdff-3d28-48eb-80f9-8fc886bf7f09\" (UID: \"699cbdff-3d28-48eb-80f9-8fc886bf7f09\") "
	Oct 09 18:31:08 addons-419518 kubelet[1283]: I1009 18:31:08.411101    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699cbdff-3d28-48eb-80f9-8fc886bf7f09-kube-api-access-jwgvm" (OuterVolumeSpecName: "kube-api-access-jwgvm") pod "699cbdff-3d28-48eb-80f9-8fc886bf7f09" (UID: "699cbdff-3d28-48eb-80f9-8fc886bf7f09"). InnerVolumeSpecName "kube-api-access-jwgvm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 09 18:31:08 addons-419518 kubelet[1283]: I1009 18:31:08.504973    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jwgvm\" (UniqueName: \"kubernetes.io/projected/699cbdff-3d28-48eb-80f9-8fc886bf7f09-kube-api-access-jwgvm\") on node \"addons-419518\" DevicePath \"\""
	Oct 09 18:31:08 addons-419518 kubelet[1283]: I1009 18:31:08.869084    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="754f8a2ebbf5d66d5d0d9c8797b459f1ef4a12cdfaba06149f755143441a127e"
	Oct 09 18:31:10 addons-419518 kubelet[1283]: I1009 18:31:10.903028    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-vm584" podStartSLOduration=100.955937899 podStartE2EDuration="1m50.903003163s" podCreationTimestamp="2025-10-09 18:29:20 +0000 UTC" firstStartedPulling="2025-10-09 18:31:00.264303568 +0000 UTC m=+111.502631646" lastFinishedPulling="2025-10-09 18:31:10.211368832 +0000 UTC m=+121.449696910" observedRunningTime="2025-10-09 18:31:10.900754726 +0000 UTC m=+122.139082804" watchObservedRunningTime="2025-10-09 18:31:10.903003163 +0000 UTC m=+122.141331249"
	Oct 09 18:31:14 addons-419518 kubelet[1283]: I1009 18:31:14.055709    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5208359-5ee5-4bec-9305-f89953e59ed6-gcp-creds\") pod \"busybox\" (UID: \"f5208359-5ee5-4bec-9305-f89953e59ed6\") " pod="default/busybox"
	Oct 09 18:31:14 addons-419518 kubelet[1283]: I1009 18:31:14.056412    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s9md\" (UniqueName: \"kubernetes.io/projected/f5208359-5ee5-4bec-9305-f89953e59ed6-kube-api-access-9s9md\") pod \"busybox\" (UID: \"f5208359-5ee5-4bec-9305-f89953e59ed6\") " pod="default/busybox"
	Oct 09 18:31:14 addons-419518 kubelet[1283]: W1009 18:31:14.270823    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/56d0a47d6947a4c39a7432f0a6969969d66741c515f2c58d1cc0c569e0fe8321/crio-8fb427be72f0946a98d5e8d9abd36d03a952cc194aa71b6f4c7e34f7ca40eeb7 WatchSource:0}: Error finding container 8fb427be72f0946a98d5e8d9abd36d03a952cc194aa71b6f4c7e34f7ca40eeb7: Status 404 returned error can't find the container with id 8fb427be72f0946a98d5e8d9abd36d03a952cc194aa71b6f4c7e34f7ca40eeb7
	Oct 09 18:31:14 addons-419518 kubelet[1283]: I1009 18:31:14.949036    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc4bd26-b1fa-47f2-be3b-724f108c4df0" path="/var/lib/kubelet/pods/cbc4bd26-b1fa-47f2-be3b-724f108c4df0/volumes"
	Oct 09 18:31:16 addons-419518 kubelet[1283]: I1009 18:31:16.924408    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.071899454 podStartE2EDuration="3.924378786s" podCreationTimestamp="2025-10-09 18:31:13 +0000 UTC" firstStartedPulling="2025-10-09 18:31:14.273620789 +0000 UTC m=+125.511948866" lastFinishedPulling="2025-10-09 18:31:16.12610012 +0000 UTC m=+127.364428198" observedRunningTime="2025-10-09 18:31:16.923088269 +0000 UTC m=+128.161416346" watchObservedRunningTime="2025-10-09 18:31:16.924378786 +0000 UTC m=+128.162706864"
	Oct 09 18:31:18 addons-419518 kubelet[1283]: I1009 18:31:18.949217    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00bd7a7e-9721-4fa2-83ee-0956d251657f" path="/var/lib/kubelet/pods/00bd7a7e-9721-4fa2-83ee-0956d251657f/volumes"
	Oct 09 18:31:22 addons-419518 kubelet[1283]: E1009 18:31:22.443606    1283 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52406->127.0.0.1:33437: write tcp 127.0.0.1:52406->127.0.0.1:33437: write: broken pipe
	
	
	==> storage-provisioner [8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17] <==
	W1009 18:30:59.187903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:01.191485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:01.196374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:03.199729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:03.203965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:05.208270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:05.217325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:07.223118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:07.228439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:09.232034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:09.246443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:11.250017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:11.258331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:13.263069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:13.267724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:15.271133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:15.275503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:17.278708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:17.285667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:19.289274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:19.295097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:21.298172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:21.303436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:23.306292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:31:23.311786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-419518 -n addons-419518
helpers_test.go:269: (dbg) Run:  kubectl --context addons-419518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bmpfw ingress-nginx-admission-patch-rv5vh registry-creds-764b6fb674-d8wvd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-419518 describe pod ingress-nginx-admission-create-bmpfw ingress-nginx-admission-patch-rv5vh registry-creds-764b6fb674-d8wvd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-419518 describe pod ingress-nginx-admission-create-bmpfw ingress-nginx-admission-patch-rv5vh registry-creds-764b6fb674-d8wvd: exit status 1 (91.735002ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bmpfw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rv5vh" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-d8wvd" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-419518 describe pod ingress-nginx-admission-create-bmpfw ingress-nginx-admission-patch-rv5vh registry-creds-764b6fb674-d8wvd: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable headlamp --alsologtostderr -v=1: exit status 11 (277.39898ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:31:25.865233  293735 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:31:25.866082  293735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:25.866098  293735 out.go:374] Setting ErrFile to fd 2...
	I1009 18:31:25.866104  293735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:31:25.866412  293735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:31:25.866760  293735 mustload.go:65] Loading cluster: addons-419518
	I1009 18:31:25.867126  293735 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:25.867144  293735 addons.go:606] checking whether the cluster is paused
	I1009 18:31:25.867254  293735 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:31:25.867275  293735 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:31:25.867745  293735 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:31:25.892747  293735 ssh_runner.go:195] Run: systemctl --version
	I1009 18:31:25.892809  293735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:31:25.911878  293735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:31:26.018240  293735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:31:26.018369  293735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:31:26.054192  293735 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:31:26.054216  293735 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:31:26.054221  293735 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:31:26.054225  293735 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:31:26.054228  293735 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:31:26.054232  293735 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:31:26.054235  293735 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:31:26.054238  293735 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:31:26.054241  293735 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:31:26.054249  293735 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:31:26.054252  293735 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:31:26.054256  293735 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:31:26.054259  293735 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:31:26.054263  293735 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:31:26.054266  293735 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:31:26.054277  293735 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:31:26.054285  293735 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:31:26.054290  293735 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:31:26.054293  293735 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:31:26.054296  293735 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:31:26.054302  293735 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:31:26.054306  293735 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:31:26.054308  293735 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:31:26.054311  293735 cri.go:89] found id: ""
	I1009 18:31:26.054362  293735 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:31:26.072610  293735 out.go:203] 
	W1009 18:31:26.075482  293735 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:31:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:31:26.075512  293735 out.go:285] * 
	* 
	W1009 18:31:26.081895  293735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:31:26.084946  293735 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.26s)
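For context on this failure mode (an editor's sketch, not part of the test output): the stderr above shows `addons disable` checking for paused containers by listing kube-system containers with crictl over SSH and then running `sudo runc list -f json`; it is the runc call that exits with status 1 on this crio node, apparently because runc's default state directory /run/runc was never created there, and that is what makes the command exit with status 11. The same two steps from the log can presumably be replayed by hand inside the node:

	$ out/minikube-linux-arm64 -p addons-419518 ssh
	# succeeds, printing the container IDs seen in the log
	$ sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# fails with the error quoted in the stderr above
	$ sudo runc list -f json
	time="..." level=error msg="open /run/runc: no such file or directory"

The later CloudSpanner and LocalPath failures below show the identical `MK_ADDON_DISABLE_PAUSED` exit from the same check.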

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-2zfvh" [c7654723-d414-47a6-b103-c7682b9d5853] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003437168s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (272.667039ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:32:35.721213  295632 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:32:35.721952  295632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:35.721969  295632 out.go:374] Setting ErrFile to fd 2...
	I1009 18:32:35.721977  295632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:35.722811  295632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:32:35.723232  295632 mustload.go:65] Loading cluster: addons-419518
	I1009 18:32:35.723719  295632 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:35.723772  295632 addons.go:606] checking whether the cluster is paused
	I1009 18:32:35.723956  295632 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:35.723998  295632 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:32:35.724540  295632 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:32:35.744581  295632 ssh_runner.go:195] Run: systemctl --version
	I1009 18:32:35.744649  295632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:32:35.763562  295632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:32:35.864933  295632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:32:35.865032  295632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:32:35.904571  295632 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:32:35.904601  295632 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:32:35.904606  295632 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:32:35.904610  295632 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:32:35.904613  295632 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:32:35.904618  295632 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:32:35.904622  295632 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:32:35.904634  295632 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:32:35.904638  295632 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:32:35.904644  295632 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:32:35.904647  295632 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:32:35.904651  295632 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:32:35.904655  295632 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:32:35.904659  295632 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:32:35.904662  295632 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:32:35.904667  295632 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:32:35.904670  295632 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:32:35.904674  295632 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:32:35.904677  295632 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:32:35.904680  295632 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:32:35.904687  295632 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:32:35.904696  295632 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:32:35.904699  295632 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:32:35.904710  295632 cri.go:89] found id: ""
	I1009 18:32:35.904766  295632 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:32:35.923818  295632 out.go:203] 
	W1009 18:32:35.926684  295632 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:32:35.926708  295632 out.go:285] * 
	* 
	W1009 18:32:35.933038  295632 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:32:35.935878  295632 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.44s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-419518 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-419518 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-419518 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2c5cc733-9a5d-40da-9fad-eabcbf4866a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2c5cc733-9a5d-40da-9fad-eabcbf4866a5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2c5cc733-9a5d-40da-9fad-eabcbf4866a5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003818707s
addons_test.go:967: (dbg) Run:  kubectl --context addons-419518 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 ssh "cat /opt/local-path-provisioner/pvc-d2bf55d1-4477-4a8c-afa5-aa8f7149764a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-419518 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-419518 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (287.158758ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:32:18.888099  295394 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:32:18.888901  295394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:18.888939  295394 out.go:374] Setting ErrFile to fd 2...
	I1009 18:32:18.888960  295394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:18.889287  295394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:32:18.889769  295394 mustload.go:65] Loading cluster: addons-419518
	I1009 18:32:18.890378  295394 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:18.890423  295394 addons.go:606] checking whether the cluster is paused
	I1009 18:32:18.890581  295394 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:18.890681  295394 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:32:18.891185  295394 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:32:18.913873  295394 ssh_runner.go:195] Run: systemctl --version
	I1009 18:32:18.913927  295394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:32:18.931583  295394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:32:19.040566  295394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:32:19.040646  295394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:32:19.072118  295394 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:32:19.072139  295394 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:32:19.072145  295394 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:32:19.072150  295394 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:32:19.072153  295394 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:32:19.072158  295394 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:32:19.072161  295394 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:32:19.072165  295394 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:32:19.072169  295394 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:32:19.072179  295394 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:32:19.072183  295394 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:32:19.072187  295394 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:32:19.072195  295394 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:32:19.072199  295394 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:32:19.072210  295394 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:32:19.072218  295394 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:32:19.072221  295394 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:32:19.072226  295394 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:32:19.072229  295394 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:32:19.072246  295394 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:32:19.072256  295394 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:32:19.072260  295394 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:32:19.072264  295394 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:32:19.072267  295394 cri.go:89] found id: ""
	I1009 18:32:19.072318  295394 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:32:19.100050  295394 out.go:203] 
	W1009 18:32:19.103136  295394 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:32:19.103218  295394 out.go:285] * 
	* 
	W1009 18:32:19.110032  295394 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:32:19.113472  295394 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.44s)
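Note: the disable call fails in minikube's paused-state check rather than in the addon itself; "sudo runc list -f json" exits 1 because /run/runc does not exist on this CRI-O node. A minimal follow-up sketch for confirming which OCI runtime state directory the node actually has (the /run/crun path is an assumption about this kicbase image, not something shown in the log):

    # which runtime state directories exist on the node?
    out/minikube-linux-arm64 -p addons-419518 ssh -- 'ls -d /run/runc /run/crun 2>/dev/null'
    # the CRI-level listing that does succeed in the log above
    out/minikube-linux-arm64 -p addons-419518 ssh -- 'sudo crictl ps -a --quiet | head'

If only a non-runc state directory exists, the runc-based paused check cannot succeed regardless of which addon is being disabled.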

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qtz2j" [6d18c154-6d7c-45c6-ba26-974eccccc490] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003920794s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (259.541134ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:32:30.448513  295572 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:32:30.449221  295572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:30.449241  295572 out.go:374] Setting ErrFile to fd 2...
	I1009 18:32:30.449248  295572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:30.449513  295572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:32:30.449825  295572 mustload.go:65] Loading cluster: addons-419518
	I1009 18:32:30.450277  295572 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:30.450299  295572 addons.go:606] checking whether the cluster is paused
	I1009 18:32:30.450417  295572 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:30.450437  295572 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:32:30.450908  295572 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:32:30.473369  295572 ssh_runner.go:195] Run: systemctl --version
	I1009 18:32:30.473476  295572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:32:30.491525  295572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:32:30.596566  295572 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:32:30.596661  295572 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:32:30.626445  295572 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:32:30.626470  295572 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:32:30.626476  295572 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:32:30.626481  295572 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:32:30.626484  295572 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:32:30.626487  295572 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:32:30.626490  295572 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:32:30.626493  295572 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:32:30.626496  295572 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:32:30.626504  295572 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:32:30.626508  295572 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:32:30.626511  295572 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:32:30.626515  295572 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:32:30.626518  295572 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:32:30.626521  295572 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:32:30.626531  295572 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:32:30.626539  295572 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:32:30.626548  295572 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:32:30.626551  295572 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:32:30.626554  295572 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:32:30.626559  295572 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:32:30.626565  295572 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:32:30.626568  295572 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:32:30.626571  295572 cri.go:89] found id: ""
	I1009 18:32:30.626625  295572 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:32:30.641855  295572 out.go:203] 
	W1009 18:32:30.644774  295572 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:32:30.644802  295572 out.go:285] * 
	* 
	W1009 18:32:30.651316  295572 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:32:30.654452  295572 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)
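Same signature as the LocalPath failure: crictl enumerates the kube-system containers, then "sudo runc list -f json" aborts the paused check. A sketch that reproduces both halves of that check by hand, assuming the addons-419518 profile is still running (both commands mirror the ssh_runner calls logged above):

    # the CRI listing that succeeds
    out/minikube-linux-arm64 -p addons-419518 ssh -- \
      'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
    # the runc listing that fails with "open /run/runc: no such file or directory"
    out/minikube-linux-arm64 -p addons-419518 ssh -- 'sudo runc list -f json'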

                                                
                                    
TestAddons/parallel/Yakd (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-hjpsv" [2c83e08b-4e01-4b7c-ad5a-3af131c7e603] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002814709s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-419518 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-419518 addons disable yakd --alsologtostderr -v=1: exit status 11 (271.328449ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:32:25.170382  295513 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:32:25.171577  295513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:25.171624  295513 out.go:374] Setting ErrFile to fd 2...
	I1009 18:32:25.171645  295513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:32:25.171974  295513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:32:25.172353  295513 mustload.go:65] Loading cluster: addons-419518
	I1009 18:32:25.172826  295513 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:25.172870  295513 addons.go:606] checking whether the cluster is paused
	I1009 18:32:25.173027  295513 config.go:182] Loaded profile config "addons-419518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:32:25.173065  295513 host.go:66] Checking if "addons-419518" exists ...
	I1009 18:32:25.173603  295513 cli_runner.go:164] Run: docker container inspect addons-419518 --format={{.State.Status}}
	I1009 18:32:25.191794  295513 ssh_runner.go:195] Run: systemctl --version
	I1009 18:32:25.191856  295513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-419518
	I1009 18:32:25.209809  295513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/addons-419518/id_rsa Username:docker}
	I1009 18:32:25.312674  295513 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:32:25.312762  295513 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:32:25.359320  295513 cri.go:89] found id: "e0fa427a81a17f6795a8090103f9727a2f193a3b3ad2bcd5d94a736d8ca0084e"
	I1009 18:32:25.359344  295513 cri.go:89] found id: "5fe03c686f8b2c63022139a2974897e4e81c34f14b173bc4773da7d89c5113fb"
	I1009 18:32:25.359349  295513 cri.go:89] found id: "d925680a7d245f754d009cf0dc84730de2241c69617ddd30733a38c852dc6f4a"
	I1009 18:32:25.359353  295513 cri.go:89] found id: "ca2e6448f0dce5ce304e1be69cae94c96f5be33b4b2d71a9fa66f2e9975c6974"
	I1009 18:32:25.359356  295513 cri.go:89] found id: "caab66aa104130d13b0bdccb9015e98f9b33af3855fa2151974c50cc1b7028fb"
	I1009 18:32:25.359365  295513 cri.go:89] found id: "46ac337e6073b1761949e4bb550d06952cc6a8acd72a9299ada487f491f9ad4c"
	I1009 18:32:25.359368  295513 cri.go:89] found id: "8c0f1f4eee9981d18f61d21ccbd57a64df8453ef83f30ddc76cae390ac3d3608"
	I1009 18:32:25.359371  295513 cri.go:89] found id: "b7dc868dfc33fd2375ea748cf7b35725f4951c71bf0e2d9a6d453b9b6c6165b5"
	I1009 18:32:25.359374  295513 cri.go:89] found id: "0d83359f6789f3add9a472daf958af079c0ebbf491264365db73e5ba91cd0d19"
	I1009 18:32:25.359384  295513 cri.go:89] found id: "c8fcd3e8370a3c98f4c36a8b35a639cbc31f979f9feb70ac9ee1a978fcb613e8"
	I1009 18:32:25.359395  295513 cri.go:89] found id: "903817dace553d8e09dfc41ddc5229c99562cffefa55b8952ef6bedae51bb9fa"
	I1009 18:32:25.359398  295513 cri.go:89] found id: "4ffab12f1d2fe330e9dee68f010b23eb3bc18c123a4d26161c2a530cb4bd2345"
	I1009 18:32:25.359401  295513 cri.go:89] found id: "88ad8c9fc37d9568ea7ec14bf8754fda332b5655b73f5b418f5de6faed0a5046"
	I1009 18:32:25.359405  295513 cri.go:89] found id: "013cbdf8660e8559d83c12607fbb78f42cfa68e95f131ed0e80d8b3fde5804f0"
	I1009 18:32:25.359412  295513 cri.go:89] found id: "612bd221adeceb41302a744d0e620665030a808b876379fb1f1f616bf2638c79"
	I1009 18:32:25.359417  295513 cri.go:89] found id: "57430c58fdb35b3b948a1f133b92fe1c99867907100e345fd19b6cd1c2f5aad9"
	I1009 18:32:25.359421  295513 cri.go:89] found id: "8511bdac64fcf1cc85702b69dbed87bba318265e4b4870dc1ddb321784c0ab17"
	I1009 18:32:25.359428  295513 cri.go:89] found id: "0fdd586f76a51d4e5f71f87f7372d4adee855a8797926c96a216942888669409"
	I1009 18:32:25.359431  295513 cri.go:89] found id: "d6540d20da4eea572a70f29b242732c2c22bce1c13c1e77c055f2b20b6101173"
	I1009 18:32:25.359434  295513 cri.go:89] found id: "7fe0435ff5aacc41521449f9d3e3e63bf379462f36404942013f3d0306df2ffa"
	I1009 18:32:25.359440  295513 cri.go:89] found id: "a04d990d6cdb2ec218780cf468b620568c1b162696103d3db4596d1c0b82d84d"
	I1009 18:32:25.359446  295513 cri.go:89] found id: "3fa30ba6794d26cf98fce12bbcc06c5266d540183c32fa04afe3827d17e51ade"
	I1009 18:32:25.359449  295513 cri.go:89] found id: "fea680bc13a62790226dade6b7a0b15d6891c564facbac9a61612d43790e1223"
	I1009 18:32:25.359452  295513 cri.go:89] found id: ""
	I1009 18:32:25.359506  295513 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 18:32:25.375375  295513 out.go:203] 
	W1009 18:32:25.378354  295513 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:32:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 18:32:25.378382  295513 out.go:285] * 
	* 
	W1009 18:32:25.384867  295513 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:32:25.387920  295513 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-419518 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)
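Every "addons disable" in this run trips over the same runc check, which points at the node's CRI-O runtime configuration rather than at the individual addons. A sketch for inspecting which low-level runtime CRI-O is configured to use, assuming the default config locations on the node (the /etc/crio paths are an assumption about this image):

    # look for the configured default runtime and runtime table entries
    out/minikube-linux-arm64 -p addons-419518 ssh -- \
      'sudo grep -Rn "default_runtime\|\[crio.runtime" /etc/crio 2>/dev/null'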

                                                
                                    
TestForceSystemdFlag (518.13s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-476949 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1009 19:25:40.221157  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-476949 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m34.418959582s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-476949] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-476949" primary control-plane node in "force-systemd-flag-476949" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
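The start exits with status 80 after roughly 8m34s, and the console output stops right after "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...", so the stall is in cluster bring-up rather than in image pull or node creation. A sketch of follow-up diagnostics while the profile still exists (the /etc/crio grep target is an assumption about the kicbase image; the logs command is the one minikube's own advice boxes recommend elsewhere in this report):

    # is the node container still up?
    docker ps --filter name=force-systemd-flag-476949
    # did --force-systemd reach the container runtime's cgroup settings?
    out/minikube-linux-arm64 -p force-systemd-flag-476949 ssh -- \
      'sudo grep -Rn cgroup_manager /etc/crio 2>/dev/null'
    # collect the full log bundle for a bug report
    out/minikube-linux-arm64 -p force-systemd-flag-476949 logs --file=logs.txt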
** stderr ** 
	I1009 19:24:29.676676  442831 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:24:29.676813  442831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:24:29.676818  442831 out.go:374] Setting ErrFile to fd 2...
	I1009 19:24:29.676824  442831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:24:29.677180  442831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:24:29.677601  442831 out.go:368] Setting JSON to false
	I1009 19:24:29.679740  442831 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7621,"bootTime":1760030249,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:24:29.679820  442831 start.go:141] virtualization:  
	I1009 19:24:29.685934  442831 out.go:179] * [force-systemd-flag-476949] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:24:29.689156  442831 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:24:29.689200  442831 notify.go:220] Checking for updates...
	I1009 19:24:29.695307  442831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:24:29.698317  442831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:24:29.701252  442831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:24:29.704234  442831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:24:29.709053  442831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:24:29.712945  442831 config.go:182] Loaded profile config "pause-446510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:24:29.713117  442831 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:24:29.744138  442831 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:24:29.744277  442831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:24:29.842568  442831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-09 19:24:29.831697643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:24:29.842689  442831 docker.go:318] overlay module found
	I1009 19:24:29.845689  442831 out.go:179] * Using the docker driver based on user configuration
	I1009 19:24:29.848588  442831 start.go:305] selected driver: docker
	I1009 19:24:29.848610  442831 start.go:925] validating driver "docker" against <nil>
	I1009 19:24:29.848625  442831 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:24:29.849385  442831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:24:29.945187  442831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2025-10-09 19:24:29.927897662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:24:29.945400  442831 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:24:29.945633  442831 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 19:24:29.948779  442831 out.go:179] * Using Docker driver with root privileges
	I1009 19:24:29.951524  442831 cni.go:84] Creating CNI manager for ""
	I1009 19:24:29.951671  442831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:24:29.951691  442831 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:24:29.951838  442831 start.go:349] cluster config:
	{Name:force-systemd-flag-476949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-476949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:24:29.959952  442831 out.go:179] * Starting "force-systemd-flag-476949" primary control-plane node in "force-systemd-flag-476949" cluster
	I1009 19:24:29.962855  442831 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:24:29.965950  442831 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:24:29.968905  442831 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:24:29.968967  442831 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:24:29.968978  442831 cache.go:64] Caching tarball of preloaded images
	I1009 19:24:29.969095  442831 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:24:29.969110  442831 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:24:29.969223  442831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/config.json ...
	I1009 19:24:29.969247  442831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/config.json: {Name:mka235abae4c644d738094cca2e2b99d0d0642d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:24:29.969419  442831 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:24:29.995360  442831 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:24:29.995386  442831 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:24:29.995405  442831 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:24:29.995430  442831 start.go:360] acquireMachinesLock for force-systemd-flag-476949: {Name:mka2a711522553c255cda139f582808be7709c66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:24:29.995538  442831 start.go:364] duration metric: took 90.143µs to acquireMachinesLock for "force-systemd-flag-476949"
	I1009 19:24:29.995571  442831 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-476949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-476949 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:24:29.995642  442831 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:24:29.999210  442831 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:24:29.999475  442831 start.go:159] libmachine.API.Create for "force-systemd-flag-476949" (driver="docker")
	I1009 19:24:29.999519  442831 client.go:168] LocalClient.Create starting
	I1009 19:24:29.999596  442831 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:24:29.999636  442831 main.go:141] libmachine: Decoding PEM data...
	I1009 19:24:29.999661  442831 main.go:141] libmachine: Parsing certificate...
	I1009 19:24:29.999721  442831 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:24:29.999747  442831 main.go:141] libmachine: Decoding PEM data...
	I1009 19:24:29.999769  442831 main.go:141] libmachine: Parsing certificate...
	I1009 19:24:30.000161  442831 cli_runner.go:164] Run: docker network inspect force-systemd-flag-476949 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:24:30.047950  442831 cli_runner.go:211] docker network inspect force-systemd-flag-476949 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:24:30.048036  442831 network_create.go:284] running [docker network inspect force-systemd-flag-476949] to gather additional debugging logs...
	I1009 19:24:30.048060  442831 cli_runner.go:164] Run: docker network inspect force-systemd-flag-476949
	W1009 19:24:30.073960  442831 cli_runner.go:211] docker network inspect force-systemd-flag-476949 returned with exit code 1
	I1009 19:24:30.073995  442831 network_create.go:287] error running [docker network inspect force-systemd-flag-476949]: docker network inspect force-systemd-flag-476949: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-476949 not found
	I1009 19:24:30.074012  442831 network_create.go:289] output of [docker network inspect force-systemd-flag-476949]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-476949 not found
	
	** /stderr **
	I1009 19:24:30.074456  442831 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:24:30.101153  442831 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:24:30.101591  442831 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:24:30.101843  442831 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:24:30.104821  442831 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a13bd0}
	I1009 19:24:30.104862  442831 network_create.go:124] attempt to create docker network force-systemd-flag-476949 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 19:24:30.104984  442831 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-476949 force-systemd-flag-476949
	I1009 19:24:30.188416  442831 network_create.go:108] docker network force-systemd-flag-476949 192.168.76.0/24 created
	I1009 19:24:30.188451  442831 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-476949" container
	I1009 19:24:30.188526  442831 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:24:30.206612  442831 cli_runner.go:164] Run: docker volume create force-systemd-flag-476949 --label name.minikube.sigs.k8s.io=force-systemd-flag-476949 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:24:30.228052  442831 oci.go:103] Successfully created a docker volume force-systemd-flag-476949
	I1009 19:24:30.228148  442831 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-476949-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-476949 --entrypoint /usr/bin/test -v force-systemd-flag-476949:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:24:32.219628  442831 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-476949-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-476949 --entrypoint /usr/bin/test -v force-systemd-flag-476949:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (1.991416105s)
	I1009 19:24:32.219659  442831 oci.go:107] Successfully prepared a docker volume force-systemd-flag-476949
	I1009 19:24:32.219703  442831 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:24:32.219720  442831 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:24:32.219799  442831 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-476949:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:24:38.530593  442831 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-476949:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (6.310752021s)
	I1009 19:24:38.530627  442831 kic.go:203] duration metric: took 6.310904786s to extract preloaded images to volume ...
	W1009 19:24:38.530778  442831 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:24:38.530907  442831 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:24:38.654797  442831 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-476949 --name force-systemd-flag-476949 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-476949 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-476949 --network force-systemd-flag-476949 --ip 192.168.76.2 --volume force-systemd-flag-476949:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:24:39.078579  442831 cli_runner.go:164] Run: docker container inspect force-systemd-flag-476949 --format={{.State.Running}}
	I1009 19:24:39.101317  442831 cli_runner.go:164] Run: docker container inspect force-systemd-flag-476949 --format={{.State.Status}}
	I1009 19:24:39.131669  442831 cli_runner.go:164] Run: docker exec force-systemd-flag-476949 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:24:39.192525  442831 oci.go:144] the created container "force-systemd-flag-476949" has a running status.
	I1009 19:24:39.192560  442831 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-flag-476949/id_rsa...
	I1009 19:24:39.966742  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-flag-476949/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:24:39.966789  442831 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-flag-476949/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:24:40.024689  442831 cli_runner.go:164] Run: docker container inspect force-systemd-flag-476949 --format={{.State.Status}}
	I1009 19:24:40.056455  442831 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:24:40.056476  442831 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-476949 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:24:40.140728  442831 cli_runner.go:164] Run: docker container inspect force-systemd-flag-476949 --format={{.State.Status}}
	I1009 19:24:40.166487  442831 machine.go:93] provisionDockerMachine start ...
	I1009 19:24:40.166599  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:40.192667  442831 main.go:141] libmachine: Using SSH client type: native
	I1009 19:24:40.193050  442831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1009 19:24:40.193074  442831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:24:40.361981  442831 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-476949
	
	I1009 19:24:40.362010  442831 ubuntu.go:182] provisioning hostname "force-systemd-flag-476949"
	I1009 19:24:40.362077  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:40.392693  442831 main.go:141] libmachine: Using SSH client type: native
	I1009 19:24:40.393084  442831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1009 19:24:40.393100  442831 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-476949 && echo "force-systemd-flag-476949" | sudo tee /etc/hostname
	I1009 19:24:40.581862  442831 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-476949
	
	I1009 19:24:40.582025  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:40.619257  442831 main.go:141] libmachine: Using SSH client type: native
	I1009 19:24:40.619556  442831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1009 19:24:40.619574  442831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-476949' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-476949/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-476949' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:24:40.807561  442831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:24:40.807590  442831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:24:40.807619  442831 ubuntu.go:190] setting up certificates
	I1009 19:24:40.807629  442831 provision.go:84] configureAuth start
	I1009 19:24:40.807691  442831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-476949
	I1009 19:24:40.839181  442831 provision.go:143] copyHostCerts
	I1009 19:24:40.839218  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:24:40.839248  442831 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:24:40.839255  442831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:24:40.839322  442831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:24:40.839407  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:24:40.839424  442831 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:24:40.839428  442831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:24:40.839456  442831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:24:40.839502  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:24:40.839517  442831 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:24:40.839521  442831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:24:40.839543  442831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:24:40.839592  442831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-476949 san=[127.0.0.1 192.168.76.2 force-systemd-flag-476949 localhost minikube]
	I1009 19:24:41.174457  442831 provision.go:177] copyRemoteCerts
	I1009 19:24:41.174534  442831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:24:41.174589  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:41.194247  442831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-flag-476949/id_rsa Username:docker}
	I1009 19:24:41.299010  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:24:41.299071  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:24:41.320882  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:24:41.320958  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 19:24:41.343629  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:24:41.343704  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:24:41.368043  442831 provision.go:87] duration metric: took 560.390467ms to configureAuth
	I1009 19:24:41.368123  442831 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:24:41.368340  442831 config.go:182] Loaded profile config "force-systemd-flag-476949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:24:41.368497  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:41.387871  442831 main.go:141] libmachine: Using SSH client type: native
	I1009 19:24:41.388189  442831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33400 <nil> <nil>}
	I1009 19:24:41.388208  442831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:24:41.695977  442831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:24:41.696000  442831 machine.go:96] duration metric: took 1.529494013s to provisionDockerMachine
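	The step just above writes the CRI-O insecure-registry option into /etc/sysconfig/crio.minikube and restarts crio. A quick sanity check on the node (a sketch only, assuming the profile is still up) would be:
	    minikube -p force-systemd-flag-476949 ssh -- cat /etc/sysconfig/crio.minikube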
	I1009 19:24:41.696011  442831 client.go:171] duration metric: took 11.696482916s to LocalClient.Create
	I1009 19:24:41.696024  442831 start.go:167] duration metric: took 11.696550511s to libmachine.API.Create "force-systemd-flag-476949"
	I1009 19:24:41.696031  442831 start.go:293] postStartSetup for "force-systemd-flag-476949" (driver="docker")
	I1009 19:24:41.696041  442831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:24:41.696106  442831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:24:41.696165  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:41.719164  442831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-flag-476949/id_rsa Username:docker}
	I1009 19:24:41.831090  442831 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:24:41.837100  442831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:24:41.837134  442831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:24:41.837147  442831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:24:41.837204  442831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:24:41.837287  442831 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:24:41.837299  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> /etc/ssl/certs/2863092.pem
	I1009 19:24:41.837412  442831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:24:41.847290  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:24:41.874283  442831 start.go:296] duration metric: took 178.237494ms for postStartSetup
	I1009 19:24:41.874659  442831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-476949
	I1009 19:24:41.895885  442831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/config.json ...
	I1009 19:24:41.896180  442831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:24:41.896238  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:41.914684  442831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-flag-476949/id_rsa Username:docker}
	I1009 19:24:42.028177  442831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:24:42.035795  442831 start.go:128] duration metric: took 12.040137363s to createHost
	I1009 19:24:42.035824  442831 start.go:83] releasing machines lock for "force-systemd-flag-476949", held for 12.040270584s
	I1009 19:24:42.035923  442831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-476949
	I1009 19:24:42.058344  442831 ssh_runner.go:195] Run: cat /version.json
	I1009 19:24:42.058527  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:42.058864  442831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:24:42.058924  442831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-476949
	I1009 19:24:42.094248  442831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-flag-476949/id_rsa Username:docker}
	I1009 19:24:42.101017  442831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33400 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-flag-476949/id_rsa Username:docker}
	I1009 19:24:42.331003  442831 ssh_runner.go:195] Run: systemctl --version
	I1009 19:24:42.338217  442831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:24:42.402154  442831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:24:42.407079  442831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:24:42.407153  442831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:24:42.443278  442831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 19:24:42.443305  442831 start.go:495] detecting cgroup driver to use...
	I1009 19:24:42.443320  442831 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1009 19:24:42.443427  442831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:24:42.464125  442831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:24:42.480247  442831 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:24:42.480316  442831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:24:42.502185  442831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:24:42.523403  442831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:24:42.678497  442831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:24:42.837561  442831 docker.go:234] disabling docker service ...
	I1009 19:24:42.837671  442831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:24:42.864261  442831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:24:42.879491  442831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:24:43.023815  442831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:24:43.185966  442831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:24:43.216113  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:24:43.248462  442831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:24:43.248538  442831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:24:43.258068  442831 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:24:43.258148  442831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:24:43.267366  442831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:24:43.276658  442831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:24:43.285808  442831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:24:43.294332  442831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:24:43.304137  442831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:24:43.318036  442831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:24:43.327443  442831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:24:43.335932  442831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:24:43.344033  442831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:24:43.498006  442831 ssh_runner.go:195] Run: sudo systemctl restart crio
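	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. Pieced together from those sed patterns (order and surrounding sections omitted, so this is an illustration rather than a capture from this run), the touched keys should end up roughly as:
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]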
	I1009 19:24:43.653148  442831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:24:43.653221  442831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:24:43.657456  442831 start.go:563] Will wait 60s for crictl version
	I1009 19:24:43.657522  442831 ssh_runner.go:195] Run: which crictl
	I1009 19:24:43.661517  442831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:24:43.689891  442831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:24:43.689977  442831 ssh_runner.go:195] Run: crio --version
	I1009 19:24:43.722348  442831 ssh_runner.go:195] Run: crio --version
	I1009 19:24:43.764074  442831 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:24:43.767086  442831 cli_runner.go:164] Run: docker network inspect force-systemd-flag-476949 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:24:43.786349  442831 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:24:43.790767  442831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:24:43.803820  442831 kubeadm.go:883] updating cluster {Name:force-systemd-flag-476949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-476949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:24:43.803929  442831 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:24:43.803987  442831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:24:43.855736  442831 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:24:43.855756  442831 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:24:43.855809  442831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:24:43.893057  442831 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:24:43.893128  442831 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:24:43.893149  442831 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 19:24:43.893283  442831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-476949 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-476949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
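	The [Service] drop-in shown above is the kubelet unit override that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (the 375-byte scp at 19:24:44.003). To inspect the effective unit on the node, a minimal check, assuming the profile still exists, is:
	    minikube -p force-systemd-flag-476949 ssh -- systemctl cat kubelet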
	I1009 19:24:43.893400  442831 ssh_runner.go:195] Run: crio config
	I1009 19:24:43.985134  442831 cni.go:84] Creating CNI manager for ""
	I1009 19:24:43.985210  442831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:24:43.985244  442831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:24:43.985300  442831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-476949 NodeName:force-systemd-flag-476949 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:24:43.985495  442831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-476949"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:24:43.985617  442831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:24:43.994573  442831 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:24:43.994695  442831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:24:44.003135  442831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1009 19:24:44.019420  442831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:24:44.037331  442831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
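	The kubeadm config rendered above is what lands in /var/tmp/minikube/kubeadm.yaml.new here (2221 bytes) and is later copied to /var/tmp/minikube/kubeadm.yaml before init runs. As a sketch only, assuming the kubeadm binary staged by minikube supports the validate subcommand, the file could be checked by hand with:
	    minikube -p force-systemd-flag-476949 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml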
	I1009 19:24:44.055312  442831 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:24:44.061104  442831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:24:44.074258  442831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:24:44.245039  442831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:24:44.266297  442831 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949 for IP: 192.168.76.2
	I1009 19:24:44.266358  442831 certs.go:195] generating shared ca certs ...
	I1009 19:24:44.266430  442831 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:24:44.266674  442831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:24:44.266774  442831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:24:44.266811  442831 certs.go:257] generating profile certs ...
	I1009 19:24:44.266928  442831 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/client.key
	I1009 19:24:44.266980  442831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/client.crt with IP's: []
	I1009 19:24:45.200250  442831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/client.crt ...
	I1009 19:24:45.200342  442831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/client.crt: {Name:mk87fbd9326f2adda0c63ca68bf892275e24bb32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:24:45.200649  442831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/client.key ...
	I1009 19:24:45.200702  442831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/client.key: {Name:mkb2402ae8552ec9f1ecf03a427fcca86c33fc57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:24:45.200906  442831 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.key.713197a7
	I1009 19:24:45.200966  442831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.crt.713197a7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 19:24:46.568593  442831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.crt.713197a7 ...
	I1009 19:24:46.568670  442831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.crt.713197a7: {Name:mkcadf40d5889cdff5c6e53908554437d06b8ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:24:46.568906  442831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.key.713197a7 ...
	I1009 19:24:46.568921  442831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.key.713197a7: {Name:mk0110e3874010df0ff239a9bc454dfc8e06c86e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:24:46.568998  442831 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.crt.713197a7 -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.crt
	I1009 19:24:46.569074  442831 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.key.713197a7 -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.key
	I1009 19:24:46.569150  442831 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.key
	I1009 19:24:46.569163  442831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.crt with IP's: []
	I1009 19:24:47.133868  442831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.crt ...
	I1009 19:24:47.133946  442831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.crt: {Name:mkbac788d0f797d3185e47aeb11b626eeb37c908 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:24:47.134176  442831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.key ...
	I1009 19:24:47.134217  442831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.key: {Name:mka46992f1f166c6547a546a2d8bfbba99727020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:24:47.134351  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:24:47.134396  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:24:47.134425  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:24:47.134467  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:24:47.134503  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:24:47.134533  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:24:47.134580  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:24:47.134613  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:24:47.134706  442831 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:24:47.134766  442831 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:24:47.134794  442831 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:24:47.134854  442831 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:24:47.134903  442831 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:24:47.134962  442831 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:24:47.135031  442831 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:24:47.135093  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> /usr/share/ca-certificates/2863092.pem
	I1009 19:24:47.135131  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:24:47.135163  442831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem -> /usr/share/ca-certificates/286309.pem
	I1009 19:24:47.135802  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:24:47.160062  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:24:47.185027  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:24:47.203390  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:24:47.222900  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 19:24:47.242931  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:24:47.263322  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:24:47.284121  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-flag-476949/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:24:47.304773  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:24:47.325223  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:24:47.345284  442831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:24:47.365559  442831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:24:47.397784  442831 ssh_runner.go:195] Run: openssl version
	I1009 19:24:47.405093  442831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:24:47.421082  442831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:24:47.424960  442831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:24:47.425069  442831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:24:47.468871  442831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:24:47.478362  442831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:24:47.487741  442831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:24:47.492091  442831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:24:47.492213  442831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:24:47.534026  442831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:24:47.543368  442831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:24:47.552681  442831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:24:47.557394  442831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:24:47.557512  442831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:24:47.599184  442831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:24:47.608459  442831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:24:47.613218  442831 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:24:47.613324  442831 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-476949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-476949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:24:47.613435  442831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:24:47.613524  442831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:24:47.699700  442831 cri.go:89] found id: ""
	I1009 19:24:47.699789  442831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:24:47.713640  442831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:24:47.722735  442831 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:24:47.722853  442831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:24:47.733978  442831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:24:47.734051  442831 kubeadm.go:157] found existing configuration files:
	
	I1009 19:24:47.734164  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:24:47.743272  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:24:47.743390  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:24:47.751943  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:24:47.761379  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:24:47.761497  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:24:47.769958  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:24:47.779053  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:24:47.779175  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:24:47.787560  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:24:47.796790  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:24:47.796934  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:24:47.805389  442831 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:24:47.856739  442831 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:24:47.858468  442831 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:24:47.893262  442831 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:24:47.893429  442831 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:24:47.893501  442831 kubeadm.go:318] OS: Linux
	I1009 19:24:47.893582  442831 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:24:47.893680  442831 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:24:47.893754  442831 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:24:47.893840  442831 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:24:47.893930  442831 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:24:47.894016  442831 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:24:47.894100  442831 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:24:47.894259  442831 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:24:47.894337  442831 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:24:47.964587  442831 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:24:47.964768  442831 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:24:47.964910  442831 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:24:47.976737  442831 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:24:47.981454  442831 out.go:252]   - Generating certificates and keys ...
	I1009 19:24:47.981615  442831 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:24:47.981721  442831 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:24:48.488265  442831 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:24:49.505574  442831 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:24:50.294506  442831 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:24:51.093084  442831 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:24:51.464687  442831 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:24:51.468834  442831 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-476949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:24:51.729452  442831 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:24:51.730184  442831 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-476949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:24:52.143477  442831 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:24:52.378411  442831 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:24:52.633844  442831 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:24:52.634450  442831 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:24:53.553809  442831 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:24:54.235170  442831 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:24:54.718884  442831 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:24:54.874505  442831 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:24:55.752569  442831 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:24:55.752684  442831 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:24:55.756543  442831 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:24:55.760145  442831 out.go:252]   - Booting up control plane ...
	I1009 19:24:55.760276  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:24:55.760372  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:24:55.763633  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:24:55.784662  442831 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:24:55.784798  442831 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:24:55.791549  442831 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:24:55.791854  442831 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:24:55.791922  442831 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:24:55.942776  442831 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:24:55.942918  442831 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:24:57.941902  442831 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001373237s
	I1009 19:24:57.945227  442831 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:24:57.945560  442831 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 19:24:57.945751  442831 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:24:57.945853  442831 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:28:57.946371  442831 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000983391s
	I1009 19:28:57.946475  442831 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839783s
	I1009 19:28:57.946716  442831 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001087425s
	I1009 19:28:57.946730  442831 kubeadm.go:318] 
	I1009 19:28:57.946820  442831 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:28:57.946901  442831 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:28:57.946993  442831 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:28:57.947215  442831 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:28:57.947299  442831 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:28:57.947380  442831 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:28:57.947384  442831 kubeadm.go:318] 
	I1009 19:28:57.951730  442831 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:28:57.951972  442831 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:28:57.952091  442831 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:28:57.952747  442831 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:28:57.952828  442831 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
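	At this point none of the three control-plane components ever answered their health endpoints, so kubeadm gives up after the 4m0s wait. The log itself points at crictl; a first triage pass on the node, using standard crictl and journalctl invocations rather than anything captured in this run, might look like:
	    minikube -p force-systemd-flag-476949 ssh
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo journalctl -u kubelet --no-pager | tail -n 100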
	W1009 19:28:57.952982  442831 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-476949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-476949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.001373237s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000983391s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839783s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001087425s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-476949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-476949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.001373237s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000983391s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839783s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001087425s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:28:57.953072  442831 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:28:58.494213  442831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:28:58.508656  442831 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:28:58.508725  442831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:28:58.516944  442831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:28:58.516965  442831 kubeadm.go:157] found existing configuration files:
	
	I1009 19:28:58.517024  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:28:58.525403  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:28:58.525521  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:28:58.533702  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:28:58.541895  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:28:58.541968  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:28:58.550037  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:28:58.558657  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:28:58.558724  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:28:58.566489  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:28:58.574710  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:28:58.574775  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:28:58.584960  442831 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:28:58.625678  442831 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:28:58.625940  442831 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:28:58.650104  442831 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:28:58.650200  442831 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:28:58.650238  442831 kubeadm.go:318] OS: Linux
	I1009 19:28:58.650283  442831 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:28:58.650331  442831 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:28:58.650379  442831 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:28:58.650426  442831 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:28:58.650474  442831 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:28:58.650522  442831 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:28:58.650574  442831 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:28:58.650621  442831 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:28:58.650667  442831 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:28:58.715382  442831 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:28:58.715593  442831 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:28:58.715744  442831 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:28:58.726595  442831 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:28:58.733108  442831 out.go:252]   - Generating certificates and keys ...
	I1009 19:28:58.733207  442831 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:28:58.733272  442831 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:28:58.733347  442831 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:28:58.733407  442831 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:28:58.733477  442831 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:28:58.733530  442831 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:28:58.733593  442831 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:28:58.733654  442831 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:28:58.733728  442831 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:28:58.733800  442831 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:28:58.733838  442831 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:28:58.733895  442831 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:28:59.255555  442831 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:29:00.155314  442831 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:29:00.719237  442831 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:29:01.723934  442831 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:29:01.833692  442831 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:29:01.834692  442831 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:29:01.837574  442831 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:29:01.842002  442831 out.go:252]   - Booting up control plane ...
	I1009 19:29:01.842110  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:29:01.842208  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:29:01.843272  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:29:01.859942  442831 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:29:01.860379  442831 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:29:01.869508  442831 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:29:01.869607  442831 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:29:01.869646  442831 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:29:02.013824  442831 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:29:02.013949  442831 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:29:03.515047  442831 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501588605s
	I1009 19:29:03.518953  442831 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:29:03.519059  442831 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 19:29:03.519326  442831 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:29:03.519422  442831 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:33:03.520031  442831 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	I1009 19:33:03.520259  442831 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	I1009 19:33:03.520678  442831 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	I1009 19:33:03.520697  442831 kubeadm.go:318] 
	I1009 19:33:03.520792  442831 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:33:03.520877  442831 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:33:03.520975  442831 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:33:03.521074  442831 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:33:03.521152  442831 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:33:03.521239  442831 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:33:03.521244  442831 kubeadm.go:318] 
	I1009 19:33:03.524674  442831 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:33:03.524961  442831 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:33:03.525098  442831 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:33:03.525730  442831 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:33:03.525801  442831 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:33:03.525863  442831 kubeadm.go:402] duration metric: took 8m15.912542815s to StartCluster
	I1009 19:33:03.525898  442831 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:33:03.525960  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:33:03.551174  442831 cri.go:89] found id: ""
	I1009 19:33:03.551207  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.551216  442831 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:33:03.551223  442831 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:33:03.551282  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:33:03.576975  442831 cri.go:89] found id: ""
	I1009 19:33:03.577002  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.577012  442831 logs.go:284] No container was found matching "etcd"
	I1009 19:33:03.577019  442831 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:33:03.577082  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:33:03.604738  442831 cri.go:89] found id: ""
	I1009 19:33:03.604761  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.604770  442831 logs.go:284] No container was found matching "coredns"
	I1009 19:33:03.604776  442831 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:33:03.604835  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:33:03.631189  442831 cri.go:89] found id: ""
	I1009 19:33:03.631215  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.631223  442831 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:33:03.631231  442831 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:33:03.631295  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:33:03.657842  442831 cri.go:89] found id: ""
	I1009 19:33:03.657871  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.657893  442831 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:33:03.657900  442831 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:33:03.657963  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:33:03.683214  442831 cri.go:89] found id: ""
	I1009 19:33:03.683240  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.683248  442831 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:33:03.683255  442831 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:33:03.683317  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:33:03.710197  442831 cri.go:89] found id: ""
	I1009 19:33:03.710225  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.710233  442831 logs.go:284] No container was found matching "kindnet"
	I1009 19:33:03.710243  442831 logs.go:123] Gathering logs for kubelet ...
	I1009 19:33:03.710254  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:33:03.799584  442831 logs.go:123] Gathering logs for dmesg ...
	I1009 19:33:03.799624  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:33:03.816616  442831 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:33:03.816742  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:33:03.884133  442831 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:33:03.874860    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.875488    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.877118    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.877640    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.879842    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:33:03.874860    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.875488    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.877118    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.877640    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.879842    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:33:03.884159  442831 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:33:03.884172  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:33:03.958432  442831 logs.go:123] Gathering logs for container status ...
	I1009 19:33:03.958470  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:33:03.986741  442831 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501588605s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:33:03.986802  442831 out.go:285] * 
	* 
	W1009 19:33:03.986885  442831 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501588605s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501588605s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:33:03.986933  442831 out.go:285] * 
	* 
	W1009 19:33:03.989100  442831 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:33:03.998288  442831 out.go:203] 
	W1009 19:33:04.002574  442831 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501588605s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501588605s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:33:04.002606  442831 out.go:285] * 
	* 
	I1009 19:33:04.007524  442831 out.go:203] 

                                                
                                                
** /stderr **
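The kubeadm output above already names the troubleshooting path for this failure. A minimal sketch of those steps, assuming the CRI-O socket path printed in the log and shell access to the node (for example via 'minikube ssh -p force-systemd-flag-476949'):

	# list Kubernetes control-plane containers managed by CRI-O (from the kubeadm hint above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect the logs of whichever container is failing, using an ID taken from that listing
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>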
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-476949 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-476949 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-09 19:33:04.389103458 +0000 UTC m=+3968.199816206
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-476949
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-476949:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40dd75be64ad2132c73d8814a3be5636cec7551ec5d99d14663e857e7e56840a",
	        "Created": "2025-10-09T19:24:38.676700652Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 444118,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:24:38.74371813Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/40dd75be64ad2132c73d8814a3be5636cec7551ec5d99d14663e857e7e56840a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40dd75be64ad2132c73d8814a3be5636cec7551ec5d99d14663e857e7e56840a/hostname",
	        "HostsPath": "/var/lib/docker/containers/40dd75be64ad2132c73d8814a3be5636cec7551ec5d99d14663e857e7e56840a/hosts",
	        "LogPath": "/var/lib/docker/containers/40dd75be64ad2132c73d8814a3be5636cec7551ec5d99d14663e857e7e56840a/40dd75be64ad2132c73d8814a3be5636cec7551ec5d99d14663e857e7e56840a-json.log",
	        "Name": "/force-systemd-flag-476949",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-476949:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-476949",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40dd75be64ad2132c73d8814a3be5636cec7551ec5d99d14663e857e7e56840a",
	                "LowerDir": "/var/lib/docker/overlay2/37abe048608e9222cdc7754dd9f6847b354bf65ea2c8d214f8d20e91a4da5bf8-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37abe048608e9222cdc7754dd9f6847b354bf65ea2c8d214f8d20e91a4da5bf8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37abe048608e9222cdc7754dd9f6847b354bf65ea2c8d214f8d20e91a4da5bf8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37abe048608e9222cdc7754dd9f6847b354bf65ea2c8d214f8d20e91a4da5bf8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-476949",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-476949/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-476949",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-476949",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-476949",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4a4c3ff3c486f56fea30b580348a9430ad0a43cd634a35c1b7f9c8f92010bbae",
	            "SandboxKey": "/var/run/docker/netns/4a4c3ff3c486",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-476949": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:98:33:95:4e:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ec14a7a0bd9d19b19472938b6aa3c0c375c9d10c0a31538aba050fc1848b55f5",
	                    "EndpointID": "4ef5d363f68b313d319a4987bb69bcd235499ac2dfff828686dc2b7822868345",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-476949",
	                        "40dd75be64ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
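Individual fields can be pulled out of the inspect output above with Docker's Go-template filter, the same mechanism the test helpers use later in this log. A sketch against this profile (given the JSON above, these should print 192.168.76.2 and 33400 respectively):

	# static IP assigned to the node container on its dedicated network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-flag-476949
	# host port published on 127.0.0.1 for SSH (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-476949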
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-476949 -n force-systemd-flag-476949
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-476949 -n force-systemd-flag-476949: exit status 6 (299.601996ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:33:04.690791  454282 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-476949" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
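The stale-kubeconfig warning in the status output above comes with its own remedy; a sketch of what that would look like for this profile (assuming the profile still exists at that point):

	# repoint the kubectl context at the profile's current endpoint
	out/minikube-linux-arm64 update-context -p force-systemd-flag-476949
	# confirm which API server the active context now targets
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'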
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-476949 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-224541 sudo systemctl cat kubelet --no-pager                                                     │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status docker --all --full --no-pager                                      │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat docker --no-pager                                                      │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /etc/docker/daemon.json                                                          │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo docker system info                                                                   │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cri-dockerd --version                                                                │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat containerd --no-pager                                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /etc/containerd/config.toml                                                      │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo containerd config dump                                                               │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status crio --all --full --no-pager                                        │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat crio --no-pager                                                        │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo crio config                                                                          │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ delete  │ -p cilium-224541                                                                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ start   │ -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ force-systemd-flag-476949 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:26:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:26:37.554291  450527 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:26:37.555123  450527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:37.555139  450527 out.go:374] Setting ErrFile to fd 2...
	I1009 19:26:37.555146  450527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:37.555447  450527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:26:37.555892  450527 out.go:368] Setting JSON to false
	I1009 19:26:37.556776  450527 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7749,"bootTime":1760030249,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:26:37.556847  450527 start.go:141] virtualization:  
	I1009 19:26:37.560242  450527 out.go:179] * [force-systemd-env-028248] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:26:37.564280  450527 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:26:37.564463  450527 notify.go:220] Checking for updates...
	I1009 19:26:37.568524  450527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:26:37.571531  450527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:26:37.574339  450527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:26:37.577143  450527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:26:37.580114  450527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 19:26:37.583735  450527 config.go:182] Loaded profile config "force-systemd-flag-476949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:37.583845  450527 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:26:37.617450  450527 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:26:37.617645  450527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:26:37.679391  450527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:26:37.670570324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:26:37.679495  450527 docker.go:318] overlay module found
	I1009 19:26:37.682765  450527 out.go:179] * Using the docker driver based on user configuration
	I1009 19:26:37.685610  450527 start.go:305] selected driver: docker
	I1009 19:26:37.685636  450527 start.go:925] validating driver "docker" against <nil>
	I1009 19:26:37.685652  450527 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:26:37.686409  450527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:26:37.742756  450527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:26:37.733992771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:26:37.742932  450527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:26:37.743160  450527 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 19:26:37.746179  450527 out.go:179] * Using Docker driver with root privileges
	I1009 19:26:37.749031  450527 cni.go:84] Creating CNI manager for ""
	I1009 19:26:37.749114  450527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:26:37.749127  450527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:26:37.749210  450527 start.go:349] cluster config:
	{Name:force-systemd-env-028248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:26:37.752350  450527 out.go:179] * Starting "force-systemd-env-028248" primary control-plane node in "force-systemd-env-028248" cluster
	I1009 19:26:37.755361  450527 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:26:37.758263  450527 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:26:37.761111  450527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:26:37.761165  450527 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:26:37.761178  450527 cache.go:64] Caching tarball of preloaded images
	I1009 19:26:37.761191  450527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:26:37.761261  450527 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:26:37.761272  450527 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:26:37.761380  450527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/config.json ...
	I1009 19:26:37.761398  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/config.json: {Name:mk3d04a15b3ddf3f3f99830bc4f72da6874e6a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:37.780791  450527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:26:37.780817  450527 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:26:37.780836  450527 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:26:37.780868  450527 start.go:360] acquireMachinesLock for force-systemd-env-028248: {Name:mkc6e3924168d990b2ddb75c42f0bb8c550df681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:26:37.781002  450527 start.go:364] duration metric: took 104.822µs to acquireMachinesLock for "force-systemd-env-028248"
	I1009 19:26:37.781040  450527 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-028248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:26:37.781118  450527 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:26:37.784545  450527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:26:37.784780  450527 start.go:159] libmachine.API.Create for "force-systemd-env-028248" (driver="docker")
	I1009 19:26:37.784827  450527 client.go:168] LocalClient.Create starting
	I1009 19:26:37.784908  450527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:26:37.784948  450527 main.go:141] libmachine: Decoding PEM data...
	I1009 19:26:37.784964  450527 main.go:141] libmachine: Parsing certificate...
	I1009 19:26:37.785017  450527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:26:37.785038  450527 main.go:141] libmachine: Decoding PEM data...
	I1009 19:26:37.785054  450527 main.go:141] libmachine: Parsing certificate...
	I1009 19:26:37.785427  450527 cli_runner.go:164] Run: docker network inspect force-systemd-env-028248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:26:37.801751  450527 cli_runner.go:211] docker network inspect force-systemd-env-028248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:26:37.801836  450527 network_create.go:284] running [docker network inspect force-systemd-env-028248] to gather additional debugging logs...
	I1009 19:26:37.801859  450527 cli_runner.go:164] Run: docker network inspect force-systemd-env-028248
	W1009 19:26:37.818696  450527 cli_runner.go:211] docker network inspect force-systemd-env-028248 returned with exit code 1
	I1009 19:26:37.818735  450527 network_create.go:287] error running [docker network inspect force-systemd-env-028248]: docker network inspect force-systemd-env-028248: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-028248 not found
	I1009 19:26:37.818751  450527 network_create.go:289] output of [docker network inspect force-systemd-env-028248]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-028248 not found
	
	** /stderr **
	I1009 19:26:37.818848  450527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:26:37.834339  450527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:26:37.834694  450527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:26:37.834918  450527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:26:37.835190  450527 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ec14a7a0bd9d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:10:66:c1:5a:8a} reservation:<nil>}
	I1009 19:26:37.835609  450527 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a169a0}
	I1009 19:26:37.835631  450527 network_create.go:124] attempt to create docker network force-systemd-env-028248 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 19:26:37.835686  450527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-028248 force-systemd-env-028248
	I1009 19:26:37.894341  450527 network_create.go:108] docker network force-systemd-env-028248 192.168.85.0/24 created
	I1009 19:26:37.894376  450527 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-028248" container
	I1009 19:26:37.894466  450527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:26:37.910480  450527 cli_runner.go:164] Run: docker volume create force-systemd-env-028248 --label name.minikube.sigs.k8s.io=force-systemd-env-028248 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:26:37.929231  450527 oci.go:103] Successfully created a docker volume force-systemd-env-028248
	I1009 19:26:37.929323  450527 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-028248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-028248 --entrypoint /usr/bin/test -v force-systemd-env-028248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:26:38.495704  450527 oci.go:107] Successfully prepared a docker volume force-systemd-env-028248
	I1009 19:26:38.495768  450527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:26:38.495778  450527 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:26:38.495856  450527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-028248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:26:42.977608  450527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-028248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.481690472s)
	I1009 19:26:42.977646  450527 kic.go:203] duration metric: took 4.481864398s to extract preloaded images to volume ...
	W1009 19:26:42.977820  450527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:26:42.977944  450527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:26:43.033558  450527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-028248 --name force-systemd-env-028248 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-028248 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-028248 --network force-systemd-env-028248 --ip 192.168.85.2 --volume force-systemd-env-028248:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:26:43.339593  450527 cli_runner.go:164] Run: docker container inspect force-systemd-env-028248 --format={{.State.Running}}
	I1009 19:26:43.364896  450527 cli_runner.go:164] Run: docker container inspect force-systemd-env-028248 --format={{.State.Status}}
	I1009 19:26:43.390310  450527 cli_runner.go:164] Run: docker exec force-systemd-env-028248 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:26:43.441929  450527 oci.go:144] the created container "force-systemd-env-028248" has a running status.
	I1009 19:26:43.441969  450527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa...
	I1009 19:26:43.926328  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:26:43.926377  450527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:26:43.945839  450527 cli_runner.go:164] Run: docker container inspect force-systemd-env-028248 --format={{.State.Status}}
	I1009 19:26:43.963340  450527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:26:43.963364  450527 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-028248 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:26:44.007712  450527 cli_runner.go:164] Run: docker container inspect force-systemd-env-028248 --format={{.State.Status}}
	I1009 19:26:44.027190  450527 machine.go:93] provisionDockerMachine start ...
	I1009 19:26:44.027305  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:44.044790  450527 main.go:141] libmachine: Using SSH client type: native
	I1009 19:26:44.045165  450527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1009 19:26:44.045182  450527 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:26:44.045851  450527 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:26:47.193883  450527 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-028248
	
	I1009 19:26:47.193912  450527 ubuntu.go:182] provisioning hostname "force-systemd-env-028248"
	I1009 19:26:47.193976  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:47.212136  450527 main.go:141] libmachine: Using SSH client type: native
	I1009 19:26:47.212453  450527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1009 19:26:47.212473  450527 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-028248 && echo "force-systemd-env-028248" | sudo tee /etc/hostname
	I1009 19:26:47.368223  450527 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-028248
	
	I1009 19:26:47.368429  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:47.387661  450527 main.go:141] libmachine: Using SSH client type: native
	I1009 19:26:47.387969  450527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1009 19:26:47.387991  450527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-028248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-028248/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-028248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:26:47.530815  450527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:26:47.530840  450527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:26:47.530873  450527 ubuntu.go:190] setting up certificates
	I1009 19:26:47.530882  450527 provision.go:84] configureAuth start
	I1009 19:26:47.530944  450527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-028248
	I1009 19:26:47.551120  450527 provision.go:143] copyHostCerts
	I1009 19:26:47.551168  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:26:47.551210  450527 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:26:47.551223  450527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:26:47.551313  450527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:26:47.551406  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:26:47.551430  450527 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:26:47.551438  450527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:26:47.551471  450527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:26:47.551519  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:26:47.551542  450527 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:26:47.551546  450527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:26:47.551579  450527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:26:47.551635  450527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-028248 san=[127.0.0.1 192.168.85.2 force-systemd-env-028248 localhost minikube]
	I1009 19:26:48.095152  450527 provision.go:177] copyRemoteCerts
	I1009 19:26:48.095223  450527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:26:48.095277  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.113838  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:48.213819  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:26:48.213877  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1009 19:26:48.231592  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:26:48.231657  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:26:48.249837  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:26:48.249899  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:26:48.268546  450527 provision.go:87] duration metric: took 737.649667ms to configureAuth
	I1009 19:26:48.268630  450527 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:26:48.268831  450527 config.go:182] Loaded profile config "force-systemd-env-028248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:48.268979  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.287143  450527 main.go:141] libmachine: Using SSH client type: native
	I1009 19:26:48.287468  450527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1009 19:26:48.287489  450527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:26:48.538110  450527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:26:48.538157  450527 machine.go:96] duration metric: took 4.510942055s to provisionDockerMachine
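	The sysconfig block a few lines up drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' into /etc/sysconfig/crio.minikube and restarts CRI-O; the crio unit in the kic base image is expected to source that file, which is how the service-CIDR registry exemption reaches the runtime. Had the force-systemd-env-028248 profile still been around after the run, a quick check would be:

	    # hypothetical check - the profile is torn down when the test ends
	    minikube -p force-systemd-env-028248 ssh -- cat /etc/sysconfig/crio.minikube
	    minikube -p force-systemd-env-028248 ssh -- systemctl cat crio   # shows whether the unit references CRIO_MINIKUBE_OPTIONS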
	I1009 19:26:48.538169  450527 client.go:171] duration metric: took 10.753330667s to LocalClient.Create
	I1009 19:26:48.538184  450527 start.go:167] duration metric: took 10.753404998s to libmachine.API.Create "force-systemd-env-028248"
	I1009 19:26:48.538196  450527 start.go:293] postStartSetup for "force-systemd-env-028248" (driver="docker")
	I1009 19:26:48.538207  450527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:26:48.538292  450527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:26:48.538339  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.560800  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:48.662276  450527 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:26:48.665814  450527 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:26:48.665841  450527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:26:48.665852  450527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:26:48.665914  450527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:26:48.666001  450527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:26:48.666012  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> /etc/ssl/certs/2863092.pem
	I1009 19:26:48.666112  450527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:26:48.673594  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:26:48.691210  450527 start.go:296] duration metric: took 152.998698ms for postStartSetup
	I1009 19:26:48.691627  450527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-028248
	I1009 19:26:48.708521  450527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/config.json ...
	I1009 19:26:48.708809  450527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:26:48.708865  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.725261  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:48.823363  450527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:26:48.828331  450527 start.go:128] duration metric: took 11.047198726s to createHost
	I1009 19:26:48.828355  450527 start.go:83] releasing machines lock for "force-systemd-env-028248", held for 11.047337099s
	I1009 19:26:48.828428  450527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-028248
	I1009 19:26:48.847297  450527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:26:48.847377  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.847590  450527 ssh_runner.go:195] Run: cat /version.json
	I1009 19:26:48.847636  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.874211  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:48.875497  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:49.066607  450527 ssh_runner.go:195] Run: systemctl --version
	I1009 19:26:49.073179  450527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:26:49.108992  450527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:26:49.113399  450527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:26:49.113468  450527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:26:49.142838  450527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 19:26:49.142861  450527 start.go:495] detecting cgroup driver to use...
	I1009 19:26:49.142878  450527 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1009 19:26:49.142929  450527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:26:49.160671  450527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:26:49.173632  450527 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:26:49.173719  450527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:26:49.192520  450527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:26:49.212480  450527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:26:49.338187  450527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:26:49.468682  450527 docker.go:234] disabling docker service ...
	I1009 19:26:49.468756  450527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:26:49.491683  450527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:26:49.505251  450527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:26:49.622419  450527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:26:49.737383  450527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:26:49.750310  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:26:49.764859  450527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:26:49.764948  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.774045  450527 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:26:49.774200  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.783606  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.793061  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.802423  450527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:26:49.810734  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.819969  450527 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.833867  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.842782  450527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:26:49.850426  450527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:26:49.858083  450527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:49.981982  450527 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:26:50.111761  450527 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:26:50.111885  450527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:26:50.116517  450527 start.go:563] Will wait 60s for crictl version
	I1009 19:26:50.116610  450527 ssh_runner.go:195] Run: which crictl
	I1009 19:26:50.120857  450527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:26:50.146752  450527 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:26:50.146881  450527 ssh_runner.go:195] Run: crio --version
	I1009 19:26:50.174603  450527 ssh_runner.go:195] Run: crio --version
	I1009 19:26:50.208798  450527 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
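	The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1 and force cgroup_manager = "systemd" / conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf before the crio restart (this run enforces the systemd cgroup driver, per the "enforced via flags" line earlier). Assuming a shell on the node, the effective values can be read back with:

	    sudo grep -E 'cgroup_manager|conmon_cgroup|pause_image' /etc/crio/crio.conf.d/02-crio.conf
	    sudo crio config | grep -E 'cgroup_manager|conmon_cgroup'   # what CRI-O actually parsed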
	I1009 19:26:50.211745  450527 cli_runner.go:164] Run: docker network inspect force-systemd-env-028248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:26:50.228270  450527 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:26:50.232176  450527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:26:50.241954  450527 kubeadm.go:883] updating cluster {Name:force-systemd-env-028248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:26:50.242063  450527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:26:50.242182  450527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:26:50.279629  450527 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:26:50.279653  450527 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:26:50.279708  450527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:26:50.304560  450527 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:26:50.304585  450527 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:26:50.304594  450527 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 19:26:50.304691  450527 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-028248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:26:50.304773  450527 ssh_runner.go:195] Run: crio config
	I1009 19:26:50.374777  450527 cni.go:84] Creating CNI manager for ""
	I1009 19:26:50.374802  450527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:26:50.374822  450527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:26:50.374846  450527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-028248 NodeName:force-systemd-env-028248 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:26:50.374986  450527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-028248"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:26:50.375067  450527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:26:50.382997  450527 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:26:50.383077  450527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:26:50.390612  450527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1009 19:26:50.403006  450527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:26:50.415916  450527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
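	The kubeadm config rendered above and copied to /var/tmp/minikube/kubeadm.yaml.new bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); note cgroupDriver: systemd in the kubelet section, matching the CRI-O side. On reasonably recent kubeadm releases a config like this can be sanity-checked offline, roughly as follows (assuming a local copy named kubeadm.yaml; the validate subcommand is only present on newer versions):

	    kubeadm config validate --config kubeadm.yaml
	    kubeadm config print init-defaults --component-configs KubeletConfiguration   # defaults to diff against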
	I1009 19:26:50.428814  450527 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:26:50.432299  450527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:26:50.442196  450527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:50.571246  450527 ssh_runner.go:195] Run: sudo systemctl start kubelet
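	The drop-in written above (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) clears and redefines ExecStart with --cgroups-per-qos=false, --enforce-node-allocatable=, the hostname override and the node IP; after the daemon-reload the merged unit can be inspected on the node with:

	    sudo systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
	    sudo systemctl status kubelet --no-pager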
	I1009 19:26:50.587464  450527 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248 for IP: 192.168.85.2
	I1009 19:26:50.587527  450527 certs.go:195] generating shared ca certs ...
	I1009 19:26:50.587574  450527 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:50.587748  450527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:26:50.587834  450527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:26:50.587866  450527 certs.go:257] generating profile certs ...
	I1009 19:26:50.587952  450527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.key
	I1009 19:26:50.587999  450527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.crt with IP's: []
	I1009 19:26:50.922816  450527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.crt ...
	I1009 19:26:50.922854  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.crt: {Name:mk27bba1e7650d93ff22d3cf6b06c6a6b1eb51cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:50.923085  450527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.key ...
	I1009 19:26:50.923112  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.key: {Name:mk4bc4be0d798e53acc6c7c190fd8c0541b2a659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:50.923215  450527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key.ebe5a00f
	I1009 19:26:50.923236  450527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt.ebe5a00f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1009 19:26:51.427955  450527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt.ebe5a00f ...
	I1009 19:26:51.428035  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt.ebe5a00f: {Name:mkf1356060b85528b15445892fe19bd981ecb30b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:51.428295  450527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key.ebe5a00f ...
	I1009 19:26:51.428333  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key.ebe5a00f: {Name:mkd3a9a265e5e7c39edf51fb3023d6b98cb5961b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:51.428483  450527 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt.ebe5a00f -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt
	I1009 19:26:51.428622  450527 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key.ebe5a00f -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key
	I1009 19:26:51.428727  450527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key
	I1009 19:26:51.428765  450527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt with IP's: []
	I1009 19:26:51.745104  450527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt ...
	I1009 19:26:51.745140  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt: {Name:mk5fe89f12790d13535a3c4a72ed796147893e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:51.745349  450527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key ...
	I1009 19:26:51.745364  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key: {Name:mk5fee4b0f54ae38f23f3e4b5f3c465c98c0d811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:51.745456  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:26:51.745476  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:26:51.745489  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:26:51.745505  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:26:51.745522  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:26:51.745540  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:26:51.745557  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:26:51.745569  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:26:51.745622  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:26:51.745672  450527 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:26:51.745685  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:26:51.745711  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:26:51.745740  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:26:51.745765  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:26:51.745810  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:26:51.745841  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem -> /usr/share/ca-certificates/286309.pem
	I1009 19:26:51.745856  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> /usr/share/ca-certificates/2863092.pem
	I1009 19:26:51.745867  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:51.746475  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:26:51.766059  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:26:51.784246  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:26:51.802855  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:26:51.820360  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 19:26:51.838780  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:26:51.857511  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:26:51.875251  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:26:51.892503  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:26:51.910041  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:26:51.927462  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:26:51.945224  450527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:26:51.957972  450527 ssh_runner.go:195] Run: openssl version
	I1009 19:26:51.964148  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:26:51.972266  450527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:26:51.975958  450527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:26:51.976025  450527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:26:52.022410  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:26:52.031254  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:26:52.039889  450527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:26:52.044142  450527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:26:52.044267  450527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:26:52.085930  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:26:52.094485  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:26:52.103301  450527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:52.107357  450527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:52.107475  450527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:52.148781  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
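	The test -L / ln -fs pairs above follow the standard OpenSSL trust-store layout: each CA under /etc/ssl/certs is addressable through a symlink named after its subject hash (here b5213941.0 for minikubeCA.pem), which is exactly the value the preceding openssl x509 -hash calls print. The same link can be built generically:

	    # the hash in the link name is the cert's subject hash, e.g. b5213941 for minikubeCA.pem
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem \
	      /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0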
	I1009 19:26:52.157041  450527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:26:52.161204  450527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:26:52.161256  450527 kubeadm.go:400] StartCluster: {Name:force-systemd-env-028248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:26:52.161335  450527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:26:52.161401  450527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:26:52.190935  450527 cri.go:89] found id: ""
	I1009 19:26:52.191078  450527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:26:52.199436  450527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:26:52.207508  450527 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:26:52.207652  450527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:26:52.216274  450527 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:26:52.216294  450527 kubeadm.go:157] found existing configuration files:
	
	I1009 19:26:52.216371  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:26:52.224136  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:26:52.224210  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:26:52.231689  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:26:52.239802  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:26:52.239918  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:26:52.247856  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:26:52.255902  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:26:52.255973  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:26:52.263733  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:26:52.271816  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:26:52.271891  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:26:52.279760  450527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:26:52.320038  450527 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:26:52.320234  450527 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:26:52.343384  450527 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:26:52.343463  450527 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:26:52.343506  450527 kubeadm.go:318] OS: Linux
	I1009 19:26:52.343558  450527 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:26:52.343613  450527 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:26:52.343665  450527 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:26:52.343719  450527 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:26:52.343776  450527 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:26:52.343832  450527 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:26:52.343883  450527 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:26:52.343942  450527 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:26:52.343994  450527 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:26:52.415057  450527 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:26:52.415182  450527 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:26:52.415333  450527 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:26:52.422741  450527 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:26:52.429708  450527 out.go:252]   - Generating certificates and keys ...
	I1009 19:26:52.429896  450527 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:26:52.430004  450527 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:26:52.688705  450527 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:26:52.898446  450527 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:26:53.349449  450527 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:26:53.685555  450527 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:26:54.850227  450527 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:26:54.850515  450527 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:26:55.394073  450527 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:26:55.394423  450527 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:26:55.475717  450527 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:26:55.802856  450527 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:26:55.869656  450527 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:26:55.870171  450527 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:26:56.439916  450527 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:26:56.951401  450527 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:26:57.243854  450527 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:26:57.378782  450527 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:26:57.827801  450527 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:26:57.828562  450527 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:26:57.831451  450527 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:26:57.834949  450527 out.go:252]   - Booting up control plane ...
	I1009 19:26:57.835062  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:26:57.835143  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:26:57.835757  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:26:57.853964  450527 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:26:57.854079  450527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:26:57.863285  450527 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:26:57.864133  450527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:26:57.865313  450527 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:26:58.010687  450527 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:26:58.010817  450527 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:00.024159  450527 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.014833697s
	I1009 19:27:00.028542  450527 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:00.029343  450527 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 19:27:00.029766  450527 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:00.030172  450527 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:28:57.946371  442831 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000983391s
	I1009 19:28:57.946475  442831 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000839783s
	I1009 19:28:57.946716  442831 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001087425s
	I1009 19:28:57.946730  442831 kubeadm.go:318] 
	I1009 19:28:57.946820  442831 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:28:57.946901  442831 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:28:57.946993  442831 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:28:57.947215  442831 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:28:57.947299  442831 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:28:57.947380  442831 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:28:57.947384  442831 kubeadm.go:318] 
	I1009 19:28:57.951730  442831 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:28:57.951972  442831 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:28:57.952091  442831 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:28:57.952747  442831 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:28:57.952828  442831 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
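	At this point none of the three control-plane endpoints (apiserver :8443, controller-manager :10257, scheduler :10259) ever answered, so kubeadm aborts after the 4m0s wait. Its own hint is the right starting point; a short triage pass on the node, while it is still up, would be roughly:

	    # the crictl one-liners are the ones kubeadm prints above
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>
	    # the kubelet launches the static pods, so its journal usually names the failure
	    sudo journalctl -u kubelet --no-pager | tail -n 100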
	W1009 19:28:57.952982  442831 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-476949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-476949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.001373237s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000983391s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000839783s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001087425s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:28:57.953072  442831 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:28:58.494213  442831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:28:58.508656  442831 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:28:58.508725  442831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:28:58.516944  442831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:28:58.516965  442831 kubeadm.go:157] found existing configuration files:
	
	I1009 19:28:58.517024  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:28:58.525403  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:28:58.525521  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:28:58.533702  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:28:58.541895  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:28:58.541968  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:28:58.550037  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:28:58.558657  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:28:58.558724  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:28:58.566489  442831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:28:58.574710  442831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:28:58.574775  442831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:28:58.584960  442831 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:28:58.625678  442831 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:28:58.625940  442831 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:28:58.650104  442831 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:28:58.650200  442831 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:28:58.650238  442831 kubeadm.go:318] OS: Linux
	I1009 19:28:58.650283  442831 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:28:58.650331  442831 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:28:58.650379  442831 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:28:58.650426  442831 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:28:58.650474  442831 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:28:58.650522  442831 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:28:58.650574  442831 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:28:58.650621  442831 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:28:58.650667  442831 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:28:58.715382  442831 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:28:58.715593  442831 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:28:58.715744  442831 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:28:58.726595  442831 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:28:58.733108  442831 out.go:252]   - Generating certificates and keys ...
	I1009 19:28:58.733207  442831 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:28:58.733272  442831 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:28:58.733347  442831 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:28:58.733407  442831 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:28:58.733477  442831 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:28:58.733530  442831 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:28:58.733593  442831 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:28:58.733654  442831 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:28:58.733728  442831 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:28:58.733800  442831 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:28:58.733838  442831 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:28:58.733895  442831 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:28:59.255555  442831 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:29:00.155314  442831 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:29:00.719237  442831 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:29:01.723934  442831 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:29:01.833692  442831 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:29:01.834692  442831 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:29:01.837574  442831 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:29:01.842002  442831 out.go:252]   - Booting up control plane ...
	I1009 19:29:01.842110  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:29:01.842208  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:29:01.843272  442831 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:29:01.859942  442831 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:29:01.860379  442831 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:29:01.869508  442831 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:29:01.869607  442831 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:29:01.869646  442831 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:29:02.013824  442831 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:29:02.013949  442831 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:29:03.515047  442831 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501588605s
	I1009 19:29:03.518953  442831 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:29:03.519059  442831 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 19:29:03.519326  442831 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:29:03.519422  442831 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:00.030731  450527 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000046632s
	I1009 19:31:00.030843  450527 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000028121s
	I1009 19:31:00.031809  450527 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000940717s
	I1009 19:31:00.031831  450527 kubeadm.go:318] 
	I1009 19:31:00.031927  450527 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:00.032014  450527 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:00.032111  450527 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:00.032315  450527 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:00.032398  450527 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:00.032481  450527 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:00.032486  450527 kubeadm.go:318] 
	I1009 19:31:00.039044  450527 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:31:00.039299  450527 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:31:00.039412  450527 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:00.048645  450527 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	I1009 19:31:00.048749  450527 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:00.048955  450527 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.014833697s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000046632s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000028121s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000940717s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:00.049049  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:00.709711  450527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:00.723247  450527 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:00.723308  450527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:00.733777  450527 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:00.733795  450527 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:00.733851  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:00.742787  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:00.742864  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:00.750923  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:00.758997  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:00.759061  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:00.766673  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:00.774800  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:00.774867  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:00.782876  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:00.790575  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:00.790668  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:00.798035  450527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:00.841253  450527 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:31:00.841588  450527 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:31:00.863828  450527 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:31:00.863905  450527 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:31:00.863945  450527 kubeadm.go:318] OS: Linux
	I1009 19:31:00.863995  450527 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:31:00.864047  450527 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:31:00.864099  450527 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:31:00.864150  450527 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:31:00.864201  450527 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:31:00.864262  450527 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:31:00.864312  450527 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:31:00.864364  450527 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:31:00.864414  450527 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:31:00.937511  450527 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:31:00.937631  450527 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:31:00.937734  450527 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:31:00.944863  450527 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:31:00.951916  450527 out.go:252]   - Generating certificates and keys ...
	I1009 19:31:00.952004  450527 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:31:00.952081  450527 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:31:00.952170  450527 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:31:00.952234  450527 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:31:00.952308  450527 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:31:00.952364  450527 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:31:00.952430  450527 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:31:00.952495  450527 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:31:00.952573  450527 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:31:00.952649  450527 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:31:00.952689  450527 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:31:00.952747  450527 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:31:01.345352  450527 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:31:01.885511  450527 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:31:02.517929  450527 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:31:02.799730  450527 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:31:03.191447  450527 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:31:03.192263  450527 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:31:03.195392  450527 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:31:03.198714  450527 out.go:252]   - Booting up control plane ...
	I1009 19:31:03.198826  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:31:03.198909  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:31:03.200033  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:31:03.216240  450527 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:31:03.216576  450527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:31:03.225184  450527 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:31:03.225626  450527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:31:03.225677  450527 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:31:03.375408  450527 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:31:03.375534  450527 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:31:04.376535  450527 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001381597s
	I1009 19:31:04.380285  450527 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:31:04.380389  450527 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 19:31:04.380488  450527 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:31:04.380575  450527 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:33:03.520031  442831 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	I1009 19:33:03.520259  442831 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	I1009 19:33:03.520678  442831 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	I1009 19:33:03.520697  442831 kubeadm.go:318] 
	I1009 19:33:03.520792  442831 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:33:03.520877  442831 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:33:03.520975  442831 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:33:03.521074  442831 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:33:03.521152  442831 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:33:03.521239  442831 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:33:03.521244  442831 kubeadm.go:318] 
	I1009 19:33:03.524674  442831 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:33:03.524961  442831 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:33:03.525098  442831 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:33:03.525730  442831 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:33:03.525801  442831 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:33:03.525863  442831 kubeadm.go:402] duration metric: took 8m15.912542815s to StartCluster
	I1009 19:33:03.525898  442831 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:33:03.525960  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:33:03.551174  442831 cri.go:89] found id: ""
	I1009 19:33:03.551207  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.551216  442831 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:33:03.551223  442831 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:33:03.551282  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:33:03.576975  442831 cri.go:89] found id: ""
	I1009 19:33:03.577002  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.577012  442831 logs.go:284] No container was found matching "etcd"
	I1009 19:33:03.577019  442831 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:33:03.577082  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:33:03.604738  442831 cri.go:89] found id: ""
	I1009 19:33:03.604761  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.604770  442831 logs.go:284] No container was found matching "coredns"
	I1009 19:33:03.604776  442831 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:33:03.604835  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:33:03.631189  442831 cri.go:89] found id: ""
	I1009 19:33:03.631215  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.631223  442831 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:33:03.631231  442831 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:33:03.631295  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:33:03.657842  442831 cri.go:89] found id: ""
	I1009 19:33:03.657871  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.657893  442831 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:33:03.657900  442831 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:33:03.657963  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:33:03.683214  442831 cri.go:89] found id: ""
	I1009 19:33:03.683240  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.683248  442831 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:33:03.683255  442831 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:33:03.683317  442831 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:33:03.710197  442831 cri.go:89] found id: ""
	I1009 19:33:03.710225  442831 logs.go:282] 0 containers: []
	W1009 19:33:03.710233  442831 logs.go:284] No container was found matching "kindnet"
	I1009 19:33:03.710243  442831 logs.go:123] Gathering logs for kubelet ...
	I1009 19:33:03.710254  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:33:03.799584  442831 logs.go:123] Gathering logs for dmesg ...
	I1009 19:33:03.799624  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:33:03.816616  442831 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:33:03.816742  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:33:03.884133  442831 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:33:03.874860    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.875488    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.877118    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.877640    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.879842    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:33:03.874860    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.875488    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.877118    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.877640    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:03.879842    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:33:03.884159  442831 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:33:03.884172  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:33:03.958432  442831 logs.go:123] Gathering logs for container status ...
	I1009 19:33:03.958470  442831 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 19:33:03.986741  442831 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501588605s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:33:03.986802  442831 out.go:285] * 
	W1009 19:33:03.986885  442831 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501588605s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:33:03.986933  442831 out.go:285] * 
	W1009 19:33:03.989100  442831 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:33:03.998288  442831 out.go:203] 
	W1009 19:33:04.002574  442831 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501588605s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000950008s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00122088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001310063s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:33:04.002606  442831 out.go:285] * 
	I1009 19:33:04.007524  442831 out.go:203] 
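
For reference, a minimal sketch (shell) of the inspection steps the kubeadm error text above suggests, run from inside the node (for example via 'minikube ssh' or a 'docker exec' into the node container). The first four commands are the ones quoted in the error message and in the log-gathering steps above; the final 'grep' is a hypothetical extra check and its config paths may differ on this image:

	# List the control-plane containers CRI-O attempted to create (quoted from the kubeadm error above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect a failing container's logs (replace CONTAINERID with an ID from the previous command)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# The same journals minikube gathers above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# Hypothetical extra check: the repeated "cannot open sd-bus: No such file or directory" errors
	# in the CRI-O log below are consistent with the systemd cgroup manager being selected while no
	# systemd/D-Bus is reachable; confirm which cgroup manager CRI-O is configured with
	sudo grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null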
	
	
	==> CRI-O <==
	Oct 09 19:32:56 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:56.30756061Z" level=info msg="createCtr: removing container 81c578e3fefb06d8ff02b2de43b37bcf9949b89713f7c9b8be23904b0316ab96" id=b671f310-a5fd-4765-bc23-433d18b1bc90 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:32:56 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:56.307598863Z" level=info msg="createCtr: deleting container 81c578e3fefb06d8ff02b2de43b37bcf9949b89713f7c9b8be23904b0316ab96 from storage" id=b671f310-a5fd-4765-bc23-433d18b1bc90 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:32:56 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:56.311266335Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-476949_kube-system_b2854ed80bc52ca3ba1eb9fbe85183d7_0" id=b671f310-a5fd-4765-bc23-433d18b1bc90 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.289146342Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ccc621b7-f1e1-4d59-9c99-2d0cc5f67a26 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.292033373Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7801a5da-325a-4ba9-8074-f6c655170619 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.293030786Z" level=info msg="Creating container: kube-system/etcd-force-systemd-flag-476949/etcd" id=f6691864-abbc-40f6-a6b2-0fba1da4388c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.293292902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.297713197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.298351023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.308572983Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f6691864-abbc-40f6-a6b2-0fba1da4388c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.31004145Z" level=info msg="createCtr: deleting container ID 4049ae34f50f5755ed5f84628d4556c70128b272210483d2baa3cd77affcbc80 from idIndex" id=f6691864-abbc-40f6-a6b2-0fba1da4388c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.310267546Z" level=info msg="createCtr: removing container 4049ae34f50f5755ed5f84628d4556c70128b272210483d2baa3cd77affcbc80" id=f6691864-abbc-40f6-a6b2-0fba1da4388c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.310317827Z" level=info msg="createCtr: deleting container 4049ae34f50f5755ed5f84628d4556c70128b272210483d2baa3cd77affcbc80 from storage" id=f6691864-abbc-40f6-a6b2-0fba1da4388c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:32:58 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:32:58.313091503Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-flag-476949_kube-system_c5fab9c601835f73729d4cfdf5645951_0" id=f6691864-abbc-40f6-a6b2-0fba1da4388c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.288770966Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=884403e9-3732-45ae-9522-30d46cc8fb73 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.289638933Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a554e91b-95f4-49c5-bb23-49a3d04aa1e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.290539803Z" level=info msg="Creating container: kube-system/kube-scheduler-force-systemd-flag-476949/kube-scheduler" id=13ea0f0f-2fd9-4d25-a31d-8bf34938efbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.290777509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.298539184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.29913449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.314783446Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=13ea0f0f-2fd9-4d25-a31d-8bf34938efbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.316219749Z" level=info msg="createCtr: deleting container ID 32ee62e8f0522423bff6428fe1655cef6f216542e19801fb5922767a5caff915 from idIndex" id=13ea0f0f-2fd9-4d25-a31d-8bf34938efbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.31626418Z" level=info msg="createCtr: removing container 32ee62e8f0522423bff6428fe1655cef6f216542e19801fb5922767a5caff915" id=13ea0f0f-2fd9-4d25-a31d-8bf34938efbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.316302564Z" level=info msg="createCtr: deleting container 32ee62e8f0522423bff6428fe1655cef6f216542e19801fb5922767a5caff915 from storage" id=13ea0f0f-2fd9-4d25-a31d-8bf34938efbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:33:03 force-systemd-flag-476949 crio[844]: time="2025-10-09T19:33:03.324427157Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-flag-476949_kube-system_3f6c70ae8581c4e6d0db3a101d07d7e0_0" id=13ea0f0f-2fd9-4d25-a31d-8bf34938efbe name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:33:05.349977    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:05.350598    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:05.352203    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:05.352752    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:33:05.354422    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 18:57] overlayfs: idmapped layers are currently not supported
	[  +4.128207] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:01] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:33:05 up  2:15,  0 user,  load average: 0.10, 0.89, 1.68
	Linux force-systemd-flag-476949 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:32:56 force-systemd-flag-476949 kubelet[1789]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-flag-476949_kube-system(b2854ed80bc52ca3ba1eb9fbe85183d7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:32:56 force-systemd-flag-476949 kubelet[1789]:  > logger="UnhandledError"
	Oct 09 19:32:56 force-systemd-flag-476949 kubelet[1789]: E1009 19:32:56.311943    1789 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-flag-476949" podUID="b2854ed80bc52ca3ba1eb9fbe85183d7"
	Oct 09 19:32:58 force-systemd-flag-476949 kubelet[1789]: E1009 19:32:58.288651    1789 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-476949\" not found" node="force-systemd-flag-476949"
	Oct 09 19:32:58 force-systemd-flag-476949 kubelet[1789]: E1009 19:32:58.314370    1789 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:32:58 force-systemd-flag-476949 kubelet[1789]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:32:58 force-systemd-flag-476949 kubelet[1789]:  > podSandboxID="f42578cc7a7d4ed9fac662c78876aefac4be17f596dc2bacbedcf6b3190d9ca3"
	Oct 09 19:32:58 force-systemd-flag-476949 kubelet[1789]: E1009 19:32:58.314499    1789 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:32:58 force-systemd-flag-476949 kubelet[1789]:         container etcd start failed in pod etcd-force-systemd-flag-476949_kube-system(c5fab9c601835f73729d4cfdf5645951): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:32:58 force-systemd-flag-476949 kubelet[1789]:  > logger="UnhandledError"
	Oct 09 19:32:58 force-systemd-flag-476949 kubelet[1789]: E1009 19:32:58.314531    1789 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-flag-476949" podUID="c5fab9c601835f73729d4cfdf5645951"
	Oct 09 19:32:59 force-systemd-flag-476949 kubelet[1789]: E1009 19:32:59.806598    1789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-476949.186ce95dc3a3e024  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-476949,UID:force-systemd-flag-476949,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-476949 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-476949,},FirstTimestamp:2025-10-09 19:29:03.318548516 +0000 UTC m=+1.306990783,LastTimestamp:2025-10-09 19:29:03.318548516 +0000 UTC m=+1.306990783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k
ubelet,ReportingInstance:force-systemd-flag-476949,}"
	Oct 09 19:32:59 force-systemd-flag-476949 kubelet[1789]: E1009 19:32:59.918753    1789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-flag-476949?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:33:00 force-systemd-flag-476949 kubelet[1789]: I1009 19:33:00.118975    1789 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-flag-476949"
	Oct 09 19:33:00 force-systemd-flag-476949 kubelet[1789]: E1009 19:33:00.119450    1789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-flag-476949"
	Oct 09 19:33:01 force-systemd-flag-476949 kubelet[1789]: E1009 19:33:01.159819    1789 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]: E1009 19:33:03.288354    1789 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-476949\" not found" node="force-systemd-flag-476949"
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]: E1009 19:33:03.324850    1789 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]:  > podSandboxID="0c3c37a8408011cb09248b6138bfacd46b2e6e4194747358668f2fe024484c6c"
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]: E1009 19:33:03.324967    1789 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-flag-476949_kube-system(3f6c70ae8581c4e6d0db3a101d07d7e0): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]:  > logger="UnhandledError"
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]: E1009 19:33:03.325009    1789 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-flag-476949" podUID="3f6c70ae8581c4e6d0db3a101d07d7e0"
	Oct 09 19:33:03 force-systemd-flag-476949 kubelet[1789]: E1009 19:33:03.352106    1789 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-476949\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-476949 -n force-systemd-flag-476949
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-476949 -n force-systemd-flag-476949: exit status 6 (327.085906ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:33:05.813039  454494 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-476949" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-476949" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-476949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-476949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-476949: (1.901251537s)
--- FAIL: TestForceSystemdFlag (518.13s)
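
    Note: every static-pod create in this profile is rejected by CRI-O with the same error, "cannot open sd-bus: No such file or directory", which is what the systemd cgroup manager reports when no systemd bus socket is reachable inside the node. A minimal diagnostic sketch, assuming an SSH-able node while the profile still exists and stock CRI-O tooling; the exact config override location minikube writes is not shown in this log, so treat the grep below as illustrative rather than the harness's own check:

        # open a shell on the failing node (profile name copied from this run)
        minikube ssh -p force-systemd-flag-476949

        # inside the node: see which cgroup manager CRI-O is configured with
        sudo crio config 2>/dev/null | grep -i cgroup_manager

        # the systemd cgroup manager needs a reachable bus socket; if neither
        # exists, container creation fails exactly as in the kubelet log above
        ls -l /run/systemd/private /run/dbus/system_bus_socket

    If cgroup_manager is "systemd" but the sockets are missing, that matches the failure pattern here; it does not by itself say whether the regression is in the kicbase image, CRI-O, or the forced-systemd configuration.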

                                                
                                    
x
+
TestForceSystemdEnv (510.91s)
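
    Note: this failure mirrors TestForceSystemdFlag above; the start exits non-zero after the control plane never comes up. The run is driven by MINIKUBE_FORCE_SYSTEMD=true (visible in the captured stdout below). A rough sketch of reproducing the env-driven variant outside the harness and checking what the node actually ended up with, assuming a minikube binary on PATH, a hypothetical profile name, and the usual kubeadm kubelet config path:

        # hypothetical reproduction (core flags copied from this run)
        MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-demo \
          --memory=3072 --driver=docker --container-runtime=crio

        # is systemd PID 1 in the node, and what cgroup driver was kubelet given?
        # (/var/lib/kubelet/config.yaml is the conventional location, assumed here)
        minikube ssh -p force-systemd-env-demo "ps -p 1 -o comm="
        minikube ssh -p force-systemd-env-demo "sudo grep cgroupDriver /var/lib/kubelet/config.yaml"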

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1009 19:27:37.154367  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:31:14.049444  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:32:37.155366  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m27.387751125s)

                                                
                                                
-- stdout --
	* [force-systemd-env-028248] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-028248" primary control-plane node in "force-systemd-env-028248" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:26:37.554291  450527 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:26:37.555123  450527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:37.555139  450527 out.go:374] Setting ErrFile to fd 2...
	I1009 19:26:37.555146  450527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:37.555447  450527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:26:37.555892  450527 out.go:368] Setting JSON to false
	I1009 19:26:37.556776  450527 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7749,"bootTime":1760030249,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:26:37.556847  450527 start.go:141] virtualization:  
	I1009 19:26:37.560242  450527 out.go:179] * [force-systemd-env-028248] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:26:37.564280  450527 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:26:37.564463  450527 notify.go:220] Checking for updates...
	I1009 19:26:37.568524  450527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:26:37.571531  450527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:26:37.574339  450527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:26:37.577143  450527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:26:37.580114  450527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 19:26:37.583735  450527 config.go:182] Loaded profile config "force-systemd-flag-476949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:37.583845  450527 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:26:37.617450  450527 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:26:37.617645  450527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:26:37.679391  450527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:26:37.670570324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:26:37.679495  450527 docker.go:318] overlay module found
	I1009 19:26:37.682765  450527 out.go:179] * Using the docker driver based on user configuration
	I1009 19:26:37.685610  450527 start.go:305] selected driver: docker
	I1009 19:26:37.685636  450527 start.go:925] validating driver "docker" against <nil>
	I1009 19:26:37.685652  450527 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:26:37.686409  450527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:26:37.742756  450527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:26:37.733992771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:26:37.742932  450527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:26:37.743160  450527 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 19:26:37.746179  450527 out.go:179] * Using Docker driver with root privileges
	I1009 19:26:37.749031  450527 cni.go:84] Creating CNI manager for ""
	I1009 19:26:37.749114  450527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:26:37.749127  450527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:26:37.749210  450527 start.go:349] cluster config:
	{Name:force-systemd-env-028248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:26:37.752350  450527 out.go:179] * Starting "force-systemd-env-028248" primary control-plane node in "force-systemd-env-028248" cluster
	I1009 19:26:37.755361  450527 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:26:37.758263  450527 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:26:37.761111  450527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:26:37.761165  450527 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:26:37.761178  450527 cache.go:64] Caching tarball of preloaded images
	I1009 19:26:37.761191  450527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:26:37.761261  450527 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:26:37.761272  450527 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:26:37.761380  450527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/config.json ...
	I1009 19:26:37.761398  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/config.json: {Name:mk3d04a15b3ddf3f3f99830bc4f72da6874e6a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:37.780791  450527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:26:37.780817  450527 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:26:37.780836  450527 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:26:37.780868  450527 start.go:360] acquireMachinesLock for force-systemd-env-028248: {Name:mkc6e3924168d990b2ddb75c42f0bb8c550df681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:26:37.781002  450527 start.go:364] duration metric: took 104.822µs to acquireMachinesLock for "force-systemd-env-028248"
	I1009 19:26:37.781040  450527 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-028248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:26:37.781118  450527 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:26:37.784545  450527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:26:37.784780  450527 start.go:159] libmachine.API.Create for "force-systemd-env-028248" (driver="docker")
	I1009 19:26:37.784827  450527 client.go:168] LocalClient.Create starting
	I1009 19:26:37.784908  450527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:26:37.784948  450527 main.go:141] libmachine: Decoding PEM data...
	I1009 19:26:37.784964  450527 main.go:141] libmachine: Parsing certificate...
	I1009 19:26:37.785017  450527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:26:37.785038  450527 main.go:141] libmachine: Decoding PEM data...
	I1009 19:26:37.785054  450527 main.go:141] libmachine: Parsing certificate...
	I1009 19:26:37.785427  450527 cli_runner.go:164] Run: docker network inspect force-systemd-env-028248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:26:37.801751  450527 cli_runner.go:211] docker network inspect force-systemd-env-028248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:26:37.801836  450527 network_create.go:284] running [docker network inspect force-systemd-env-028248] to gather additional debugging logs...
	I1009 19:26:37.801859  450527 cli_runner.go:164] Run: docker network inspect force-systemd-env-028248
	W1009 19:26:37.818696  450527 cli_runner.go:211] docker network inspect force-systemd-env-028248 returned with exit code 1
	I1009 19:26:37.818735  450527 network_create.go:287] error running [docker network inspect force-systemd-env-028248]: docker network inspect force-systemd-env-028248: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-028248 not found
	I1009 19:26:37.818751  450527 network_create.go:289] output of [docker network inspect force-systemd-env-028248]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-028248 not found
	
	** /stderr **
	I1009 19:26:37.818848  450527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:26:37.834339  450527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:26:37.834694  450527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:26:37.834918  450527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:26:37.835190  450527 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ec14a7a0bd9d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:10:66:c1:5a:8a} reservation:<nil>}
	I1009 19:26:37.835609  450527 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a169a0}
	I1009 19:26:37.835631  450527 network_create.go:124] attempt to create docker network force-systemd-env-028248 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 19:26:37.835686  450527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-028248 force-systemd-env-028248
	I1009 19:26:37.894341  450527 network_create.go:108] docker network force-systemd-env-028248 192.168.85.0/24 created
	I1009 19:26:37.894376  450527 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-028248" container
	I1009 19:26:37.894466  450527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:26:37.910480  450527 cli_runner.go:164] Run: docker volume create force-systemd-env-028248 --label name.minikube.sigs.k8s.io=force-systemd-env-028248 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:26:37.929231  450527 oci.go:103] Successfully created a docker volume force-systemd-env-028248
	I1009 19:26:37.929323  450527 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-028248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-028248 --entrypoint /usr/bin/test -v force-systemd-env-028248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:26:38.495704  450527 oci.go:107] Successfully prepared a docker volume force-systemd-env-028248
	I1009 19:26:38.495768  450527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:26:38.495778  450527 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:26:38.495856  450527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-028248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:26:42.977608  450527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-028248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.481690472s)
	I1009 19:26:42.977646  450527 kic.go:203] duration metric: took 4.481864398s to extract preloaded images to volume ...
	W1009 19:26:42.977820  450527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:26:42.977944  450527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:26:43.033558  450527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-028248 --name force-systemd-env-028248 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-028248 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-028248 --network force-systemd-env-028248 --ip 192.168.85.2 --volume force-systemd-env-028248:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:26:43.339593  450527 cli_runner.go:164] Run: docker container inspect force-systemd-env-028248 --format={{.State.Running}}
	I1009 19:26:43.364896  450527 cli_runner.go:164] Run: docker container inspect force-systemd-env-028248 --format={{.State.Status}}
	I1009 19:26:43.390310  450527 cli_runner.go:164] Run: docker exec force-systemd-env-028248 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:26:43.441929  450527 oci.go:144] the created container "force-systemd-env-028248" has a running status.
	I1009 19:26:43.441969  450527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa...
	I1009 19:26:43.926328  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:26:43.926377  450527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:26:43.945839  450527 cli_runner.go:164] Run: docker container inspect force-systemd-env-028248 --format={{.State.Status}}
	I1009 19:26:43.963340  450527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:26:43.963364  450527 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-028248 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:26:44.007712  450527 cli_runner.go:164] Run: docker container inspect force-systemd-env-028248 --format={{.State.Status}}
	I1009 19:26:44.027190  450527 machine.go:93] provisionDockerMachine start ...
	I1009 19:26:44.027305  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:44.044790  450527 main.go:141] libmachine: Using SSH client type: native
	I1009 19:26:44.045165  450527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1009 19:26:44.045182  450527 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:26:44.045851  450527 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:26:47.193883  450527 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-028248
	
	I1009 19:26:47.193912  450527 ubuntu.go:182] provisioning hostname "force-systemd-env-028248"
	I1009 19:26:47.193976  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:47.212136  450527 main.go:141] libmachine: Using SSH client type: native
	I1009 19:26:47.212453  450527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1009 19:26:47.212473  450527 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-028248 && echo "force-systemd-env-028248" | sudo tee /etc/hostname
	I1009 19:26:47.368223  450527 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-028248
	
	I1009 19:26:47.368429  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:47.387661  450527 main.go:141] libmachine: Using SSH client type: native
	I1009 19:26:47.387969  450527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1009 19:26:47.387991  450527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-028248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-028248/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-028248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:26:47.530815  450527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:26:47.530840  450527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:26:47.530873  450527 ubuntu.go:190] setting up certificates
	I1009 19:26:47.530882  450527 provision.go:84] configureAuth start
	I1009 19:26:47.530944  450527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-028248
	I1009 19:26:47.551120  450527 provision.go:143] copyHostCerts
	I1009 19:26:47.551168  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:26:47.551210  450527 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:26:47.551223  450527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:26:47.551313  450527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:26:47.551406  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:26:47.551430  450527 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:26:47.551438  450527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:26:47.551471  450527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:26:47.551519  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:26:47.551542  450527 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:26:47.551546  450527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:26:47.551579  450527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:26:47.551635  450527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-028248 san=[127.0.0.1 192.168.85.2 force-systemd-env-028248 localhost minikube]
	I1009 19:26:48.095152  450527 provision.go:177] copyRemoteCerts
	I1009 19:26:48.095223  450527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:26:48.095277  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.113838  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:48.213819  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:26:48.213877  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1009 19:26:48.231592  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:26:48.231657  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:26:48.249837  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:26:48.249899  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:26:48.268546  450527 provision.go:87] duration metric: took 737.649667ms to configureAuth
	I1009 19:26:48.268630  450527 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:26:48.268831  450527 config.go:182] Loaded profile config "force-systemd-env-028248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:48.268979  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.287143  450527 main.go:141] libmachine: Using SSH client type: native
	I1009 19:26:48.287468  450527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1009 19:26:48.287489  450527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:26:48.538110  450527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:26:48.538157  450527 machine.go:96] duration metric: took 4.510942055s to provisionDockerMachine
	I1009 19:26:48.538169  450527 client.go:171] duration metric: took 10.753330667s to LocalClient.Create
	I1009 19:26:48.538184  450527 start.go:167] duration metric: took 10.753404998s to libmachine.API.Create "force-systemd-env-028248"
	I1009 19:26:48.538196  450527 start.go:293] postStartSetup for "force-systemd-env-028248" (driver="docker")
	I1009 19:26:48.538207  450527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:26:48.538292  450527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:26:48.538339  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.560800  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:48.662276  450527 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:26:48.665814  450527 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:26:48.665841  450527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:26:48.665852  450527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:26:48.665914  450527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:26:48.666001  450527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:26:48.666012  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> /etc/ssl/certs/2863092.pem
	I1009 19:26:48.666112  450527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:26:48.673594  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:26:48.691210  450527 start.go:296] duration metric: took 152.998698ms for postStartSetup
	I1009 19:26:48.691627  450527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-028248
	I1009 19:26:48.708521  450527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/config.json ...
	I1009 19:26:48.708809  450527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:26:48.708865  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.725261  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:48.823363  450527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:26:48.828331  450527 start.go:128] duration metric: took 11.047198726s to createHost
	I1009 19:26:48.828355  450527 start.go:83] releasing machines lock for "force-systemd-env-028248", held for 11.047337099s
	I1009 19:26:48.828428  450527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-028248
	I1009 19:26:48.847297  450527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:26:48.847377  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.847590  450527 ssh_runner.go:195] Run: cat /version.json
	I1009 19:26:48.847636  450527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-028248
	I1009 19:26:48.874211  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:48.875497  450527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/force-systemd-env-028248/id_rsa Username:docker}
	I1009 19:26:49.066607  450527 ssh_runner.go:195] Run: systemctl --version
	I1009 19:26:49.073179  450527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:26:49.108992  450527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:26:49.113399  450527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:26:49.113468  450527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:26:49.142838  450527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 19:26:49.142861  450527 start.go:495] detecting cgroup driver to use...
	I1009 19:26:49.142878  450527 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1009 19:26:49.142929  450527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:26:49.160671  450527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:26:49.173632  450527 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:26:49.173719  450527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:26:49.192520  450527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:26:49.212480  450527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:26:49.338187  450527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:26:49.468682  450527 docker.go:234] disabling docker service ...
	I1009 19:26:49.468756  450527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:26:49.491683  450527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:26:49.505251  450527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:26:49.622419  450527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:26:49.737383  450527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:26:49.750310  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:26:49.764859  450527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:26:49.764948  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.774045  450527 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:26:49.774200  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.783606  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.793061  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.802423  450527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:26:49.810734  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.819969  450527 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.833867  450527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:49.842782  450527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:26:49.850426  450527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:26:49.858083  450527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:49.981982  450527 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:26:50.111761  450527 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:26:50.111885  450527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:26:50.116517  450527 start.go:563] Will wait 60s for crictl version
	I1009 19:26:50.116610  450527 ssh_runner.go:195] Run: which crictl
	I1009 19:26:50.120857  450527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:26:50.146752  450527 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:26:50.146881  450527 ssh_runner.go:195] Run: crio --version
	I1009 19:26:50.174603  450527 ssh_runner.go:195] Run: crio --version
	I1009 19:26:50.208798  450527 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:26:50.211745  450527 cli_runner.go:164] Run: docker network inspect force-systemd-env-028248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:26:50.228270  450527 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:26:50.232176  450527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:26:50.241954  450527 kubeadm.go:883] updating cluster {Name:force-systemd-env-028248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:26:50.242063  450527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:26:50.242182  450527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:26:50.279629  450527 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:26:50.279653  450527 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:26:50.279708  450527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:26:50.304560  450527 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:26:50.304585  450527 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:26:50.304594  450527 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 19:26:50.304691  450527 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-028248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:26:50.304773  450527 ssh_runner.go:195] Run: crio config
	I1009 19:26:50.374777  450527 cni.go:84] Creating CNI manager for ""
	I1009 19:26:50.374802  450527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:26:50.374822  450527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:26:50.374846  450527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-028248 NodeName:force-systemd-env-028248 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:26:50.374986  450527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-028248"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:26:50.375067  450527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:26:50.382997  450527 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:26:50.383077  450527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:26:50.390612  450527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1009 19:26:50.403006  450527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:26:50.415916  450527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1009 19:26:50.428814  450527 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:26:50.432299  450527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:26:50.442196  450527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:50.571246  450527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:26:50.587464  450527 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248 for IP: 192.168.85.2
	I1009 19:26:50.587527  450527 certs.go:195] generating shared ca certs ...
	I1009 19:26:50.587574  450527 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:50.587748  450527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:26:50.587834  450527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:26:50.587866  450527 certs.go:257] generating profile certs ...
	I1009 19:26:50.587952  450527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.key
	I1009 19:26:50.587999  450527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.crt with IP's: []
	I1009 19:26:50.922816  450527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.crt ...
	I1009 19:26:50.922854  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.crt: {Name:mk27bba1e7650d93ff22d3cf6b06c6a6b1eb51cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:50.923085  450527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.key ...
	I1009 19:26:50.923112  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/client.key: {Name:mk4bc4be0d798e53acc6c7c190fd8c0541b2a659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:50.923215  450527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key.ebe5a00f
	I1009 19:26:50.923236  450527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt.ebe5a00f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1009 19:26:51.427955  450527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt.ebe5a00f ...
	I1009 19:26:51.428035  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt.ebe5a00f: {Name:mkf1356060b85528b15445892fe19bd981ecb30b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:51.428295  450527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key.ebe5a00f ...
	I1009 19:26:51.428333  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key.ebe5a00f: {Name:mkd3a9a265e5e7c39edf51fb3023d6b98cb5961b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:51.428483  450527 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt.ebe5a00f -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt
	I1009 19:26:51.428622  450527 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key.ebe5a00f -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key
	I1009 19:26:51.428727  450527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key
	I1009 19:26:51.428765  450527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt with IP's: []
	I1009 19:26:51.745104  450527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt ...
	I1009 19:26:51.745140  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt: {Name:mk5fe89f12790d13535a3c4a72ed796147893e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:51.745349  450527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key ...
	I1009 19:26:51.745364  450527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key: {Name:mk5fee4b0f54ae38f23f3e4b5f3c465c98c0d811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:51.745456  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:26:51.745476  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:26:51.745489  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:26:51.745505  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:26:51.745522  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:26:51.745540  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:26:51.745557  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:26:51.745569  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:26:51.745622  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:26:51.745672  450527 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:26:51.745685  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:26:51.745711  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:26:51.745740  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:26:51.745765  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:26:51.745810  450527 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:26:51.745841  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem -> /usr/share/ca-certificates/286309.pem
	I1009 19:26:51.745856  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> /usr/share/ca-certificates/2863092.pem
	I1009 19:26:51.745867  450527 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:51.746475  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:26:51.766059  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:26:51.784246  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:26:51.802855  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:26:51.820360  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 19:26:51.838780  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:26:51.857511  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:26:51.875251  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/force-systemd-env-028248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:26:51.892503  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:26:51.910041  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:26:51.927462  450527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:26:51.945224  450527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:26:51.957972  450527 ssh_runner.go:195] Run: openssl version
	I1009 19:26:51.964148  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:26:51.972266  450527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:26:51.975958  450527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:26:51.976025  450527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:26:52.022410  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:26:52.031254  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:26:52.039889  450527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:26:52.044142  450527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:26:52.044267  450527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:26:52.085930  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:26:52.094485  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:26:52.103301  450527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:52.107357  450527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:52.107475  450527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:52.148781  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:26:52.157041  450527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:26:52.161204  450527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:26:52.161256  450527 kubeadm.go:400] StartCluster: {Name:force-systemd-env-028248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-028248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:26:52.161335  450527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:26:52.161401  450527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:26:52.190935  450527 cri.go:89] found id: ""
	I1009 19:26:52.191078  450527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:26:52.199436  450527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:26:52.207508  450527 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:26:52.207652  450527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:26:52.216274  450527 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:26:52.216294  450527 kubeadm.go:157] found existing configuration files:
	
	I1009 19:26:52.216371  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:26:52.224136  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:26:52.224210  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:26:52.231689  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:26:52.239802  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:26:52.239918  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:26:52.247856  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:26:52.255902  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:26:52.255973  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:26:52.263733  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:26:52.271816  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:26:52.271891  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:26:52.279760  450527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:26:52.320038  450527 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:26:52.320234  450527 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:26:52.343384  450527 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:26:52.343463  450527 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:26:52.343506  450527 kubeadm.go:318] OS: Linux
	I1009 19:26:52.343558  450527 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:26:52.343613  450527 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:26:52.343665  450527 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:26:52.343719  450527 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:26:52.343776  450527 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:26:52.343832  450527 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:26:52.343883  450527 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:26:52.343942  450527 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:26:52.343994  450527 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:26:52.415057  450527 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:26:52.415182  450527 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:26:52.415333  450527 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:26:52.422741  450527 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:26:52.429708  450527 out.go:252]   - Generating certificates and keys ...
	I1009 19:26:52.429896  450527 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:26:52.430004  450527 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:26:52.688705  450527 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:26:52.898446  450527 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:26:53.349449  450527 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:26:53.685555  450527 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:26:54.850227  450527 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:26:54.850515  450527 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:26:55.394073  450527 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:26:55.394423  450527 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:26:55.475717  450527 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:26:55.802856  450527 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:26:55.869656  450527 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:26:55.870171  450527 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:26:56.439916  450527 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:26:56.951401  450527 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:26:57.243854  450527 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:26:57.378782  450527 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:26:57.827801  450527 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:26:57.828562  450527 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:26:57.831451  450527 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:26:57.834949  450527 out.go:252]   - Booting up control plane ...
	I1009 19:26:57.835062  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:26:57.835143  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:26:57.835757  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:26:57.853964  450527 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:26:57.854079  450527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:26:57.863285  450527 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:26:57.864133  450527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:26:57.865313  450527 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:26:58.010687  450527 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:26:58.010817  450527 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:00.024159  450527 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.014833697s
	I1009 19:27:00.028542  450527 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:00.029343  450527 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 19:27:00.029766  450527 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:00.030172  450527 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:31:00.030731  450527 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000046632s
	I1009 19:31:00.030843  450527 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000028121s
	I1009 19:31:00.031809  450527 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000940717s
	I1009 19:31:00.031831  450527 kubeadm.go:318] 
	I1009 19:31:00.031927  450527 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:31:00.032014  450527 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:31:00.032111  450527 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:31:00.032315  450527 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:31:00.032398  450527 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:31:00.032481  450527 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:31:00.032486  450527 kubeadm.go:318] 
	I1009 19:31:00.039044  450527 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:31:00.039299  450527 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:31:00.039412  450527 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:31:00.048645  450527 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	I1009 19:31:00.048749  450527 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 19:31:00.048955  450527 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.014833697s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000046632s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000028121s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000940717s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-028248 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.014833697s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000046632s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000028121s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000940717s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 19:31:00.049049  450527 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:31:00.709711  450527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:31:00.723247  450527 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:31:00.723308  450527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:31:00.733777  450527 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:31:00.733795  450527 kubeadm.go:157] found existing configuration files:
	
	I1009 19:31:00.733851  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:31:00.742787  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:31:00.742864  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:31:00.750923  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:31:00.758997  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:31:00.759061  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:31:00.766673  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:31:00.774800  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:31:00.774867  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:31:00.782876  450527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:31:00.790575  450527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:31:00.790668  450527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:31:00.798035  450527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:31:00.841253  450527 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:31:00.841588  450527 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:31:00.863828  450527 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:31:00.863905  450527 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:31:00.863945  450527 kubeadm.go:318] OS: Linux
	I1009 19:31:00.863995  450527 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:31:00.864047  450527 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:31:00.864099  450527 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:31:00.864150  450527 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:31:00.864201  450527 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:31:00.864262  450527 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:31:00.864312  450527 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:31:00.864364  450527 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:31:00.864414  450527 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:31:00.937511  450527 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:31:00.937631  450527 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:31:00.937734  450527 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:31:00.944863  450527 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:31:00.951916  450527 out.go:252]   - Generating certificates and keys ...
	I1009 19:31:00.952004  450527 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:31:00.952081  450527 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:31:00.952170  450527 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:31:00.952234  450527 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:31:00.952308  450527 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:31:00.952364  450527 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:31:00.952430  450527 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:31:00.952495  450527 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:31:00.952573  450527 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:31:00.952649  450527 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:31:00.952689  450527 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:31:00.952747  450527 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:31:01.345352  450527 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:31:01.885511  450527 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:31:02.517929  450527 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:31:02.799730  450527 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:31:03.191447  450527 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:31:03.192263  450527 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:31:03.195392  450527 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:31:03.198714  450527 out.go:252]   - Booting up control plane ...
	I1009 19:31:03.198826  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:31:03.198909  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:31:03.200033  450527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:31:03.216240  450527 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:31:03.216576  450527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:31:03.225184  450527 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:31:03.225626  450527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:31:03.225677  450527 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:31:03.375408  450527 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:31:03.375534  450527 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:31:04.376535  450527 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001381597s
	I1009 19:31:04.380285  450527 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:31:04.380389  450527 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 19:31:04.380488  450527 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:31:04.380575  450527 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:35:04.380785  450527 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000099544s
	I1009 19:35:04.383287  450527 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000998211s
	I1009 19:35:04.386549  450527 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.005685814s
	I1009 19:35:04.386570  450527 kubeadm.go:318] 
	I1009 19:35:04.386666  450527 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:35:04.386753  450527 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:35:04.386844  450527 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:35:04.386942  450527 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:35:04.387030  450527 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:35:04.387118  450527 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:35:04.387123  450527 kubeadm.go:318] 
	I1009 19:35:04.390876  450527 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:35:04.391125  450527 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:35:04.391243  450527 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:35:04.391883  450527 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:35:04.391963  450527 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:35:04.392024  450527 kubeadm.go:402] duration metric: took 8m12.230772173s to StartCluster
	I1009 19:35:04.392062  450527 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:35:04.392130  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:35:04.426623  450527 cri.go:89] found id: ""
	I1009 19:35:04.426659  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.426669  450527 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:35:04.426676  450527 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:35:04.426739  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:35:04.452095  450527 cri.go:89] found id: ""
	I1009 19:35:04.452119  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.452128  450527 logs.go:284] No container was found matching "etcd"
	I1009 19:35:04.452135  450527 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:35:04.452201  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:35:04.477106  450527 cri.go:89] found id: ""
	I1009 19:35:04.477130  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.477143  450527 logs.go:284] No container was found matching "coredns"
	I1009 19:35:04.477150  450527 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:35:04.477215  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:35:04.503218  450527 cri.go:89] found id: ""
	I1009 19:35:04.503242  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.503251  450527 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:35:04.503258  450527 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:35:04.503318  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:35:04.528510  450527 cri.go:89] found id: ""
	I1009 19:35:04.528537  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.528546  450527 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:35:04.528552  450527 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:35:04.528613  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:35:04.557283  450527 cri.go:89] found id: ""
	I1009 19:35:04.557353  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.557378  450527 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:35:04.557404  450527 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:35:04.557513  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:35:04.583331  450527 cri.go:89] found id: ""
	I1009 19:35:04.583405  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.583420  450527 logs.go:284] No container was found matching "kindnet"
	I1009 19:35:04.583431  450527 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:35:04.583443  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:35:04.663227  450527 logs.go:123] Gathering logs for container status ...
	I1009 19:35:04.663264  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:35:04.699153  450527 logs.go:123] Gathering logs for kubelet ...
	I1009 19:35:04.699181  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:35:04.785027  450527 logs.go:123] Gathering logs for dmesg ...
	I1009 19:35:04.785062  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:35:04.802377  450527 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:35:04.802416  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:35:04.874279  450527 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:35:04.865361    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.866360    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.867238    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.868722    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.869255    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:35:04.865361    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.866360    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.867238    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.868722    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.869255    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1009 19:35:04.874343  450527 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001381597s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000099544s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000998211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.005685814s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:35:04.874406  450527 out.go:285] * 
	W1009 19:35:04.874480  450527 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001381597s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000099544s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000998211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.005685814s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:35:04.874601  450527 out.go:285] * 
	W1009 19:35:04.877213  450527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:35:04.882700  450527 out.go:203] 
	W1009 19:35:04.885478  450527 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001381597s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000099544s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000998211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.005685814s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:35:04.885508  450527 out.go:285] * 
	I1009 19:35:04.888670  450527 out.go:203] 

                                                
                                                
** /stderr **
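The kubeadm output captured above already names the next triage step: list the control-plane containers with crictl and read their logs. A minimal sketch of that follow-up, run from the host against the still-running profile (the force-systemd-env-028248 name and the out/minikube-linux-arm64 binary are taken from this report; the crictl and journalctl invocations mirror the commands already shown in the log, and assuming the node image has them on PATH):

	# list kube-* containers inside the minikube node (the kubeadm hint above, wrapped in 'minikube ssh')
	out/minikube-linux-arm64 ssh -p force-systemd-env-028248 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# inspect one failing container's logs (replace CONTAINERID with an ID from the listing)
	out/minikube-linux-arm64 ssh -p force-systemd-env-028248 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"
	# kubelet and CRI-O unit logs, the same journalctl calls minikube itself runs above
	out/minikube-linux-arm64 ssh -p force-systemd-env-028248 -- "sudo journalctl -u kubelet -n 400"
	out/minikube-linux-arm64 ssh -p force-systemd-env-028248 -- "sudo journalctl -u crio -n 400"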
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-09 19:35:04.959408906 +0000 UTC m=+4088.770121613
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-028248
helpers_test.go:243: (dbg) docker inspect force-systemd-env-028248:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "673eaf8f1d05483db617637417c11815f54fb94f8e32565a4e7332ca6ab65a97",
	        "Created": "2025-10-09T19:26:43.04917375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 450926,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:26:43.114445618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/673eaf8f1d05483db617637417c11815f54fb94f8e32565a4e7332ca6ab65a97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/673eaf8f1d05483db617637417c11815f54fb94f8e32565a4e7332ca6ab65a97/hostname",
	        "HostsPath": "/var/lib/docker/containers/673eaf8f1d05483db617637417c11815f54fb94f8e32565a4e7332ca6ab65a97/hosts",
	        "LogPath": "/var/lib/docker/containers/673eaf8f1d05483db617637417c11815f54fb94f8e32565a4e7332ca6ab65a97/673eaf8f1d05483db617637417c11815f54fb94f8e32565a4e7332ca6ab65a97-json.log",
	        "Name": "/force-systemd-env-028248",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "force-systemd-env-028248:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-028248",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "673eaf8f1d05483db617637417c11815f54fb94f8e32565a4e7332ca6ab65a97",
	                "LowerDir": "/var/lib/docker/overlay2/b52ab41f616a5ee09cfdb294400f3cbde55058c0d958d9ce46e78b3fe668dfcd-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b52ab41f616a5ee09cfdb294400f3cbde55058c0d958d9ce46e78b3fe668dfcd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b52ab41f616a5ee09cfdb294400f3cbde55058c0d958d9ce46e78b3fe668dfcd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b52ab41f616a5ee09cfdb294400f3cbde55058c0d958d9ce46e78b3fe668dfcd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-028248",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-028248/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-028248",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-028248",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-028248",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "16a80c7c2cbe23e202060b989694b179a59e9eac812c50d1d45730acf3e3dd3f",
	            "SandboxKey": "/var/run/docker/netns/16a80c7c2cbe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-028248": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:8b:ee:b8:bb:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a33a79043c735cc395c94c819958fe63d805f07fcf0e0aff766edd4d261d84a",
	                    "EndpointID": "d7c78e0bb37265a6be528a2b28a3c31842980322e62e69749d5c50bf2226e808",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-028248",
	                        "673eaf8f1d05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
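For reference, the empty HostPort values under HostConfig.PortBindings in the inspect output above mean Docker assigns ephemeral host ports at container start; the resolved mappings appear under NetworkSettings.Ports (22/tcp on 33405, 8443/tcp on 33408, and so on). Below is a minimal Go sketch of reading one of those mappings with the same docker inspect template the cli_runner uses later in these logs; it assumes the docker CLI is on PATH and the container is still present.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// container name taken from the inspect output above
	name := "force-systemd-env-028248"
	// same Go template the cli_runner invokes below to locate the mapped SSH port
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("22/tcp is published on host port", strings.TrimSpace(string(out))) // 33405 in the output above
}

The template indexes NetworkSettings.Ports by container port, so the same call works for 8443/tcp when probing the apiserver mapping.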
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-028248 -n force-systemd-env-028248
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-028248 -n force-systemd-env-028248: exit status 6 (341.659349ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:35:05.303027  457635 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-028248" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
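The stderr above is minikube's status check failing to find the profile's endpoint in the kubeconfig, and the stdout suggests `minikube update-context` as the fix. A rough sketch of that check from the outside, assuming kubectl and minikube are on PATH and a kubectl recent enough to support `-o name` on get-contexts (status.go itself reads the kubeconfig file directly rather than shelling out):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "force-systemd-env-028248" // profile named in the status error above

	// list context names known to the current kubeconfig
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == profile {
			fmt.Println("context present; status should resolve the endpoint")
			return
		}
	}
	// a missing context is what `minikube update-context` (suggested in the stdout above) repairs
	fmt.Printf("context %q missing; run: minikube update-context -p %s\n", profile, profile)
}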
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-028248 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-224541 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status docker --all --full --no-pager                                      │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat docker --no-pager                                                      │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /etc/docker/daemon.json                                                          │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo docker system info                                                                   │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cri-dockerd --version                                                                │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat containerd --no-pager                                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /etc/containerd/config.toml                                                      │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo containerd config dump                                                               │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status crio --all --full --no-pager                                        │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat crio --no-pager                                                        │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo crio config                                                                          │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ delete  │ -p cilium-224541                                                                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ start   │ -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ force-systemd-flag-476949 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-flag-476949                                                                               │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:33:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:33:07.771523  454875 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:33:07.771634  454875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:33:07.771638  454875 out.go:374] Setting ErrFile to fd 2...
	I1009 19:33:07.771642  454875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:33:07.772363  454875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:33:07.772844  454875 out.go:368] Setting JSON to false
	I1009 19:33:07.773755  454875 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8139,"bootTime":1760030249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:33:07.773819  454875 start.go:141] virtualization:  
	I1009 19:33:07.777636  454875 out.go:179] * [cert-expiration-259172] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:33:07.783163  454875 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:33:07.783267  454875 notify.go:220] Checking for updates...
	I1009 19:33:07.790464  454875 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:33:07.793957  454875 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:33:07.796976  454875 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:33:07.801240  454875 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:33:07.804381  454875 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:33:07.808029  454875 config.go:182] Loaded profile config "force-systemd-env-028248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:33:07.808135  454875 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:33:07.831663  454875 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:33:07.831775  454875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:33:07.904011  454875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:33:07.894643685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:33:07.904127  454875 docker.go:318] overlay module found
	I1009 19:33:07.907269  454875 out.go:179] * Using the docker driver based on user configuration
	I1009 19:33:07.910021  454875 start.go:305] selected driver: docker
	I1009 19:33:07.910029  454875 start.go:925] validating driver "docker" against <nil>
	I1009 19:33:07.910042  454875 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:33:07.910822  454875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:33:07.964418  454875 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:33:07.955002035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:33:07.964554  454875 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:33:07.964807  454875 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 19:33:07.967726  454875 out.go:179] * Using Docker driver with root privileges
	I1009 19:33:07.970531  454875 cni.go:84] Creating CNI manager for ""
	I1009 19:33:07.970590  454875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:33:07.970598  454875 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:33:07.970673  454875 start.go:349] cluster config:
	{Name:cert-expiration-259172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:33:07.973797  454875 out.go:179] * Starting "cert-expiration-259172" primary control-plane node in "cert-expiration-259172" cluster
	I1009 19:33:07.976563  454875 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:33:07.979421  454875 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:33:07.982338  454875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:33:07.982385  454875 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:33:07.982393  454875 cache.go:64] Caching tarball of preloaded images
	I1009 19:33:07.982407  454875 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:33:07.982474  454875 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:33:07.982482  454875 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:33:07.982591  454875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/config.json ...
	I1009 19:33:07.982606  454875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/config.json: {Name:mk1792a03bebee231ddeefe5098dab461ee66c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:08.007033  454875 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:33:08.007048  454875 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:33:08.007064  454875 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:33:08.007089  454875 start.go:360] acquireMachinesLock for cert-expiration-259172: {Name:mk65f125499ece3a4312e2f3a76b34efae63b1d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:33:08.007219  454875 start.go:364] duration metric: took 115.094µs to acquireMachinesLock for "cert-expiration-259172"
	I1009 19:33:08.007247  454875 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-259172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:33:08.007331  454875 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:33:08.011094  454875 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:33:08.011390  454875 start.go:159] libmachine.API.Create for "cert-expiration-259172" (driver="docker")
	I1009 19:33:08.011433  454875 client.go:168] LocalClient.Create starting
	I1009 19:33:08.011535  454875 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:33:08.011574  454875 main.go:141] libmachine: Decoding PEM data...
	I1009 19:33:08.011589  454875 main.go:141] libmachine: Parsing certificate...
	I1009 19:33:08.011650  454875 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:33:08.011667  454875 main.go:141] libmachine: Decoding PEM data...
	I1009 19:33:08.011676  454875 main.go:141] libmachine: Parsing certificate...
	I1009 19:33:08.012054  454875 cli_runner.go:164] Run: docker network inspect cert-expiration-259172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:33:08.027919  454875 cli_runner.go:211] docker network inspect cert-expiration-259172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:33:08.028027  454875 network_create.go:284] running [docker network inspect cert-expiration-259172] to gather additional debugging logs...
	I1009 19:33:08.028042  454875 cli_runner.go:164] Run: docker network inspect cert-expiration-259172
	W1009 19:33:08.045955  454875 cli_runner.go:211] docker network inspect cert-expiration-259172 returned with exit code 1
	I1009 19:33:08.045975  454875 network_create.go:287] error running [docker network inspect cert-expiration-259172]: docker network inspect cert-expiration-259172: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-259172 not found
	I1009 19:33:08.045999  454875 network_create.go:289] output of [docker network inspect cert-expiration-259172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-259172 not found
	
	** /stderr **
	I1009 19:33:08.046103  454875 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:33:08.064336  454875 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:33:08.064686  454875 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:33:08.064943  454875 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:33:08.065472  454875 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d7860}
	I1009 19:33:08.065489  454875 network_create.go:124] attempt to create docker network cert-expiration-259172 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 19:33:08.065553  454875 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-259172 cert-expiration-259172
	I1009 19:33:08.138416  454875 network_create.go:108] docker network cert-expiration-259172 192.168.76.0/24 created
	I1009 19:33:08.138438  454875 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-259172" container
	I1009 19:33:08.138518  454875 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:33:08.155394  454875 cli_runner.go:164] Run: docker volume create cert-expiration-259172 --label name.minikube.sigs.k8s.io=cert-expiration-259172 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:33:08.173320  454875 oci.go:103] Successfully created a docker volume cert-expiration-259172
	I1009 19:33:08.173399  454875 cli_runner.go:164] Run: docker run --rm --name cert-expiration-259172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-259172 --entrypoint /usr/bin/test -v cert-expiration-259172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:33:08.712683  454875 oci.go:107] Successfully prepared a docker volume cert-expiration-259172
	I1009 19:33:08.712739  454875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:33:08.712747  454875 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:33:08.712830  454875 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-259172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:33:13.141098  454875 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-259172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.428216863s)
	I1009 19:33:13.141126  454875 kic.go:203] duration metric: took 4.428372016s to extract preloaded images to volume ...
	W1009 19:33:13.141269  454875 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:33:13.141374  454875 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:33:13.194298  454875 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-259172 --name cert-expiration-259172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-259172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-259172 --network cert-expiration-259172 --ip 192.168.76.2 --volume cert-expiration-259172:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:33:13.492859  454875 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Running}}
	I1009 19:33:13.513394  454875 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:33:13.539413  454875 cli_runner.go:164] Run: docker exec cert-expiration-259172 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:33:13.589116  454875 oci.go:144] the created container "cert-expiration-259172" has a running status.
	I1009 19:33:13.589135  454875 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa...
	I1009 19:33:14.187437  454875 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:33:14.206553  454875 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:33:14.227619  454875 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:33:14.227630  454875 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-259172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:33:14.267263  454875 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:33:14.284840  454875 machine.go:93] provisionDockerMachine start ...
	I1009 19:33:14.284921  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:14.303335  454875 main.go:141] libmachine: Using SSH client type: native
	I1009 19:33:14.303662  454875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1009 19:33:14.303669  454875 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:33:14.304309  454875 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:33:17.449781  454875 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-259172
	
	I1009 19:33:17.449795  454875 ubuntu.go:182] provisioning hostname "cert-expiration-259172"
	I1009 19:33:17.449869  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:17.467667  454875 main.go:141] libmachine: Using SSH client type: native
	I1009 19:33:17.467968  454875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1009 19:33:17.467979  454875 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-259172 && echo "cert-expiration-259172" | sudo tee /etc/hostname
	I1009 19:33:17.623912  454875 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-259172
	
	I1009 19:33:17.623998  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:17.642939  454875 main.go:141] libmachine: Using SSH client type: native
	I1009 19:33:17.643237  454875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1009 19:33:17.643253  454875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-259172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-259172/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-259172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:33:17.786446  454875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:33:17.786463  454875 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:33:17.786488  454875 ubuntu.go:190] setting up certificates
	I1009 19:33:17.786496  454875 provision.go:84] configureAuth start
	I1009 19:33:17.786556  454875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-259172
	I1009 19:33:17.804209  454875 provision.go:143] copyHostCerts
	I1009 19:33:17.804269  454875 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:33:17.804278  454875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:33:17.804357  454875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:33:17.804452  454875 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:33:17.804456  454875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:33:17.804482  454875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:33:17.804532  454875 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:33:17.804536  454875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:33:17.804556  454875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:33:17.804601  454875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-259172 san=[127.0.0.1 192.168.76.2 cert-expiration-259172 localhost minikube]
	I1009 19:33:18.688492  454875 provision.go:177] copyRemoteCerts
	I1009 19:33:18.688551  454875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:33:18.688599  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:18.705629  454875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:33:18.806313  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:33:18.823656  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:33:18.841545  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 19:33:18.858997  454875 provision.go:87] duration metric: took 1.072489558s to configureAuth
	I1009 19:33:18.859014  454875 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:33:18.859208  454875 config.go:182] Loaded profile config "cert-expiration-259172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:33:18.859306  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:18.877550  454875 main.go:141] libmachine: Using SSH client type: native
	I1009 19:33:18.877853  454875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1009 19:33:18.877866  454875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:33:19.135093  454875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:33:19.135109  454875 machine.go:96] duration metric: took 4.85025959s to provisionDockerMachine
	I1009 19:33:19.135118  454875 client.go:171] duration metric: took 11.12367952s to LocalClient.Create
	I1009 19:33:19.135130  454875 start.go:167] duration metric: took 11.123741969s to libmachine.API.Create "cert-expiration-259172"
	I1009 19:33:19.135136  454875 start.go:293] postStartSetup for "cert-expiration-259172" (driver="docker")
	I1009 19:33:19.135145  454875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:33:19.135216  454875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:33:19.135257  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:19.152581  454875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:33:19.262182  454875 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:33:19.265442  454875 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:33:19.265465  454875 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:33:19.265475  454875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:33:19.265531  454875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:33:19.265632  454875 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:33:19.265738  454875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:33:19.273564  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:33:19.291106  454875 start.go:296] duration metric: took 155.956149ms for postStartSetup
	I1009 19:33:19.291470  454875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-259172
	I1009 19:33:19.307751  454875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/config.json ...
	I1009 19:33:19.308025  454875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:33:19.308064  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:19.324071  454875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:33:19.423330  454875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:33:19.428161  454875 start.go:128] duration metric: took 11.420814192s to createHost
	I1009 19:33:19.428177  454875 start.go:83] releasing machines lock for "cert-expiration-259172", held for 11.42095072s
	I1009 19:33:19.428263  454875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-259172
	I1009 19:33:19.445922  454875 ssh_runner.go:195] Run: cat /version.json
	I1009 19:33:19.445973  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:19.446206  454875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:33:19.446263  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:19.465691  454875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:33:19.476360  454875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:33:19.565986  454875 ssh_runner.go:195] Run: systemctl --version
	I1009 19:33:19.661290  454875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:33:19.704410  454875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:33:19.708752  454875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:33:19.708815  454875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:33:19.736466  454875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 19:33:19.736480  454875 start.go:495] detecting cgroup driver to use...
	I1009 19:33:19.736511  454875 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:33:19.736569  454875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:33:19.754982  454875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:33:19.767628  454875 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:33:19.767681  454875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:33:19.785936  454875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:33:19.805107  454875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:33:19.927668  454875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:33:20.061419  454875 docker.go:234] disabling docker service ...
	I1009 19:33:20.061498  454875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:33:20.085468  454875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:33:20.103508  454875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:33:20.227201  454875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:33:20.351481  454875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:33:20.365437  454875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:33:20.379841  454875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:33:20.379908  454875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:33:20.388646  454875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:33:20.388705  454875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:33:20.397575  454875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:33:20.406106  454875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:33:20.414947  454875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:33:20.423100  454875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:33:20.431846  454875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:33:20.447509  454875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:33:20.457511  454875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:33:20.465283  454875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:33:20.473121  454875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:33:20.594331  454875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:33:20.729013  454875 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:33:20.729075  454875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:33:20.732836  454875 start.go:563] Will wait 60s for crictl version
	I1009 19:33:20.732894  454875 ssh_runner.go:195] Run: which crictl
	I1009 19:33:20.736288  454875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:33:20.760064  454875 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:33:20.760146  454875 ssh_runner.go:195] Run: crio --version
	I1009 19:33:20.789972  454875 ssh_runner.go:195] Run: crio --version
	I1009 19:33:20.821443  454875 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:33:20.824352  454875 cli_runner.go:164] Run: docker network inspect cert-expiration-259172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:33:20.840193  454875 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:33:20.844610  454875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:33:20.854452  454875 kubeadm.go:883] updating cluster {Name:cert-expiration-259172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:33:20.854560  454875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:33:20.854618  454875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:33:20.888033  454875 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:33:20.888045  454875 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:33:20.888110  454875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:33:20.914277  454875 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:33:20.914289  454875 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:33:20.914295  454875 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 19:33:20.914377  454875 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-259172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:33:20.914456  454875 ssh_runner.go:195] Run: crio config
	I1009 19:33:20.988040  454875 cni.go:84] Creating CNI manager for ""
	I1009 19:33:20.988064  454875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:33:20.988083  454875 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:33:20.988114  454875 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-259172 NodeName:cert-expiration-259172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:33:20.988250  454875 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-259172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
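
The rendered kubeadm config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and later copied to /var/tmp/minikube/kubeadm.yaml before init. If a file like this needs a manual sanity check, recent kubeadm releases include a validator (sketch only; minikube does not run this step):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
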
	
	I1009 19:33:20.988332  454875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:33:20.996175  454875 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:33:20.996235  454875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:33:21.004825  454875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1009 19:33:21.018902  454875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:33:21.032915  454875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1009 19:33:21.046713  454875 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:33:21.050684  454875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:33:21.060664  454875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:33:21.181510  454875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:33:21.197982  454875 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172 for IP: 192.168.76.2
	I1009 19:33:21.197994  454875 certs.go:195] generating shared ca certs ...
	I1009 19:33:21.198008  454875 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:21.198182  454875 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:33:21.198230  454875 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:33:21.198236  454875 certs.go:257] generating profile certs ...
	I1009 19:33:21.198302  454875 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.key
	I1009 19:33:21.198314  454875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.crt with IP's: []
	I1009 19:33:21.774068  454875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.crt ...
	I1009 19:33:21.774084  454875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.crt: {Name:mk3ab17a25e8e728114875e4b4dda201c2beeec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:21.774291  454875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.key ...
	I1009 19:33:21.774299  454875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.key: {Name:mka50b8c2ad287542113d96be2edd7ab97f3f9c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:21.774406  454875 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key.73a2860b
	I1009 19:33:21.774419  454875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 19:33:21.983940  454875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b ...
	I1009 19:33:21.983954  454875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b: {Name:mk4f66d1d878208061ab842c22d612743a720728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:21.984144  454875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key.73a2860b ...
	I1009 19:33:21.984152  454875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key.73a2860b: {Name:mk710929fdd0ad36525027cd3c0e3a801b10547e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:21.984233  454875 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt
	I1009 19:33:21.984318  454875 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key.73a2860b -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key
	I1009 19:33:21.984373  454875 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.key
	I1009 19:33:21.984385  454875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt with IP's: []
	I1009 19:33:22.282088  454875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt ...
	I1009 19:33:22.282105  454875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt: {Name:mkcba953cce8f79d8e16d386042f318b0ba0fdbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:22.282302  454875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.key ...
	I1009 19:33:22.282309  454875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.key: {Name:mk6980b45bcb047c24048d689aa63f372d567a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
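
The profile certs generated above include an apiserver serving certificate signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. Those SANs can be confirmed with openssl (illustrative only):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
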
	I1009 19:33:22.282504  454875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:33:22.282538  454875 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:33:22.282545  454875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:33:22.282568  454875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:33:22.282588  454875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:33:22.282609  454875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:33:22.282649  454875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:33:22.283230  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:33:22.302946  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:33:22.322501  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:33:22.340145  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:33:22.358544  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 19:33:22.375590  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:33:22.393090  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:33:22.410241  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:33:22.427841  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:33:22.446699  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:33:22.463857  454875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:33:22.481619  454875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:33:22.494284  454875 ssh_runner.go:195] Run: openssl version
	I1009 19:33:22.500675  454875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:33:22.509432  454875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:33:22.513421  454875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:33:22.513484  454875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:33:22.554913  454875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:33:22.563442  454875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:33:22.571913  454875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:33:22.576151  454875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:33:22.576215  454875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:33:22.617095  454875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:33:22.628005  454875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:33:22.636277  454875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:33:22.640554  454875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:33:22.640608  454875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:33:22.683139  454875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
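
Each CA dropped into /usr/share/ca-certificates gets a companion symlink in /etc/ssl/certs named after its OpenSSL subject hash; that hash is exactly what the `openssl x509 -hash -noout` calls above compute (b5213941 for minikubeCA.pem, hence the b5213941.0 link). Reproducing one by hand (sketch):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem
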
	I1009 19:33:22.691609  454875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:33:22.695518  454875 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:33:22.695572  454875 kubeadm.go:400] StartCluster: {Name:cert-expiration-259172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:33:22.695632  454875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:33:22.695692  454875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:33:22.725837  454875 cri.go:89] found id: ""
	I1009 19:33:22.725908  454875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:33:22.733595  454875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:33:22.741263  454875 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:33:22.741319  454875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:33:22.749044  454875 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:33:22.749052  454875 kubeadm.go:157] found existing configuration files:
	
	I1009 19:33:22.749107  454875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:33:22.756787  454875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:33:22.756849  454875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:33:22.764150  454875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:33:22.771878  454875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:33:22.771936  454875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:33:22.779653  454875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:33:22.787444  454875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:33:22.787498  454875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:33:22.795302  454875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:33:22.803682  454875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:33:22.803737  454875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
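
The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted before kubeadm init runs. Condensed into one loop, the same check is roughly (sketch):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
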
	I1009 19:33:22.811198  454875 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:33:22.851510  454875 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:33:22.851563  454875 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:33:22.878964  454875 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:33:22.879030  454875 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:33:22.879066  454875 kubeadm.go:318] OS: Linux
	I1009 19:33:22.879112  454875 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:33:22.879162  454875 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:33:22.879211  454875 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:33:22.879259  454875 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:33:22.879308  454875 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:33:22.879356  454875 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:33:22.879403  454875 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:33:22.879452  454875 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:33:22.879499  454875 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:33:22.947697  454875 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:33:22.947825  454875 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:33:22.947937  454875 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:33:22.958539  454875 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:33:22.961940  454875 out.go:252]   - Generating certificates and keys ...
	I1009 19:33:22.962032  454875 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:33:22.962163  454875 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:33:23.604440  454875 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:33:24.241206  454875 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:33:24.834558  454875 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:33:25.399342  454875 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:33:25.988578  454875 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:33:25.988927  454875 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-259172 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:33:26.297150  454875 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:33:26.297506  454875 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-259172 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:33:27.761655  454875 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:33:28.346482  454875 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:33:29.344858  454875 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:33:29.345136  454875 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:33:30.340162  454875 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:33:31.845505  454875 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:33:32.411476  454875 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:33:33.200384  454875 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:33:33.694926  454875 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:33:33.695880  454875 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:33:33.700743  454875 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:33:33.704279  454875 out.go:252]   - Booting up control plane ...
	I1009 19:33:33.704387  454875 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:33:33.704479  454875 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:33:33.705166  454875 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:33:33.720532  454875 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:33:33.720645  454875 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:33:33.728532  454875 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:33:33.728823  454875 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:33:33.728867  454875 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:33:33.868302  454875 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:33:33.868425  454875 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:33:35.370496  454875 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500898092s
	I1009 19:33:35.372322  454875 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:33:35.372410  454875 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 19:33:35.372502  454875 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:33:35.372606  454875 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:33:38.276744  454875 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.903448287s
	I1009 19:33:39.657385  454875 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.284990353s
	I1009 19:33:40.874927  454875 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502231153s
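
kubeadm keeps polling the three endpoints listed above until each reports healthy. The same probes can be issued manually from the node (sketch; -k skips verification of the self-signed serving certificates):

    curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
    curl -sk https://192.168.76.2:8443/livez    # kube-apiserver
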
	I1009 19:33:40.897255  454875 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:33:40.914197  454875 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:33:40.931910  454875 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:33:40.932131  454875 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-259172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:33:40.945834  454875 kubeadm.go:318] [bootstrap-token] Using token: tgjckp.eslfk8icssk2yk7t
	I1009 19:33:40.948724  454875 out.go:252]   - Configuring RBAC rules ...
	I1009 19:33:40.948845  454875 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:33:40.954111  454875 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:33:40.965655  454875 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:33:40.970650  454875 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:33:40.974873  454875 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:33:40.979482  454875 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:33:41.282530  454875 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:33:41.717364  454875 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:33:42.282556  454875 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:33:42.283880  454875 kubeadm.go:318] 
	I1009 19:33:42.283952  454875 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:33:42.283957  454875 kubeadm.go:318] 
	I1009 19:33:42.284037  454875 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:33:42.284048  454875 kubeadm.go:318] 
	I1009 19:33:42.284074  454875 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:33:42.284134  454875 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:33:42.284186  454875 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:33:42.284192  454875 kubeadm.go:318] 
	I1009 19:33:42.284247  454875 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:33:42.284251  454875 kubeadm.go:318] 
	I1009 19:33:42.284300  454875 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:33:42.284304  454875 kubeadm.go:318] 
	I1009 19:33:42.284358  454875 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:33:42.284436  454875 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:33:42.284506  454875 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:33:42.284511  454875 kubeadm.go:318] 
	I1009 19:33:42.284598  454875 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:33:42.284677  454875 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:33:42.284681  454875 kubeadm.go:318] 
	I1009 19:33:42.284768  454875 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tgjckp.eslfk8icssk2yk7t \
	I1009 19:33:42.284874  454875 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:33:42.284895  454875 kubeadm.go:318] 	--control-plane 
	I1009 19:33:42.284901  454875 kubeadm.go:318] 
	I1009 19:33:42.284999  454875 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:33:42.285002  454875 kubeadm.go:318] 
	I1009 19:33:42.285087  454875 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tgjckp.eslfk8icssk2yk7t \
	I1009 19:33:42.285192  454875 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
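
The --discovery-token-ca-cert-hash shown in the join commands is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate minikube placed at /var/lib/minikube/certs/ca.crt (a sketch; the openssl pipeline is illustrative):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex
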
	I1009 19:33:42.288650  454875 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:33:42.288876  454875 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:33:42.289025  454875 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:33:42.289041  454875 cni.go:84] Creating CNI manager for ""
	I1009 19:33:42.289049  454875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:33:42.294168  454875 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:33:42.297073  454875 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:33:42.301639  454875 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:33:42.301651  454875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:33:42.316428  454875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:33:42.617255  454875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:33:42.617353  454875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:33:42.617422  454875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-259172 minikube.k8s.io/updated_at=2025_10_09T19_33_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=cert-expiration-259172 minikube.k8s.io/primary=true
	I1009 19:33:42.825958  454875 ops.go:34] apiserver oom_adj: -16
	I1009 19:33:42.825993  454875 kubeadm.go:1113] duration metric: took 208.701637ms to wait for elevateKubeSystemPrivileges
	I1009 19:33:42.826006  454875 kubeadm.go:402] duration metric: took 20.130437282s to StartCluster
	I1009 19:33:42.826020  454875 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:42.826079  454875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:33:42.826717  454875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:33:42.826934  454875 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:33:42.827011  454875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:33:42.827250  454875 config.go:182] Loaded profile config "cert-expiration-259172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:33:42.827284  454875 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:33:42.827337  454875 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-259172"
	I1009 19:33:42.827350  454875 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-259172"
	I1009 19:33:42.827368  454875 host.go:66] Checking if "cert-expiration-259172" exists ...
	I1009 19:33:42.827833  454875 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:33:42.828339  454875 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-259172"
	I1009 19:33:42.828354  454875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-259172"
	I1009 19:33:42.828605  454875 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:33:42.830295  454875 out.go:179] * Verifying Kubernetes components...
	I1009 19:33:42.833246  454875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:33:42.884644  454875 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:33:42.887660  454875 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:33:42.887670  454875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:33:42.887728  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:42.888389  454875 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-259172"
	I1009 19:33:42.888414  454875 host.go:66] Checking if "cert-expiration-259172" exists ...
	I1009 19:33:42.888817  454875 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:33:42.911848  454875 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:33:42.911900  454875 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:33:42.911979  454875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:33:42.954260  454875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:33:42.954824  454875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:33:43.174996  454875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:33:43.175212  454875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:33:43.183207  454875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:33:43.270842  454875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:33:43.603161  454875 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
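
The sed pipeline above injects a hosts plugin block into the coredns ConfigMap so that host.minikube.internal resolves to 192.168.76.1 from inside the cluster. Once the ConfigMap is replaced, the block can be inspected with (illustrative):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
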
	I1009 19:33:43.604778  454875 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:33:43.604823  454875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:33:43.766361  454875 api_server.go:72] duration metric: took 939.40415ms to wait for apiserver process to appear ...
	I1009 19:33:43.766372  454875 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:33:43.766386  454875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:33:43.783724  454875 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 19:33:43.784835  454875 api_server.go:141] control plane version: v1.34.1
	I1009 19:33:43.784851  454875 api_server.go:131] duration metric: took 18.47361ms to wait for apiserver health ...
	I1009 19:33:43.784858  454875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:33:43.788551  454875 system_pods.go:59] 5 kube-system pods found
	I1009 19:33:43.788572  454875 system_pods.go:61] "etcd-cert-expiration-259172" [83075f2c-d6c0-4fe9-852c-8769d7bf3efe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:33:43.788581  454875 system_pods.go:61] "kube-apiserver-cert-expiration-259172" [07ad064e-6bee-42b3-a46a-36d2c59f7fc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:33:43.788588  454875 system_pods.go:61] "kube-controller-manager-cert-expiration-259172" [cf12e9f6-9dd1-4220-9695-fe84ff963087] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:33:43.788595  454875 system_pods.go:61] "kube-scheduler-cert-expiration-259172" [52b35bc5-4c21-4ecf-a0a2-e041d93ceded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:33:43.788599  454875 system_pods.go:61] "storage-provisioner" [da951c86-f8b2-4a51-a4e7-21729f3a61a8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 19:33:43.788607  454875 system_pods.go:74] duration metric: took 3.744413ms to wait for pod list to return data ...
	I1009 19:33:43.788619  454875 kubeadm.go:586] duration metric: took 961.665283ms to wait for: map[apiserver:true system_pods:true]
	I1009 19:33:43.788630  454875 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:33:43.791828  454875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:33:43.791847  454875 node_conditions.go:123] node cpu capacity is 2
	I1009 19:33:43.791858  454875 node_conditions.go:105] duration metric: took 3.22457ms to run NodePressure ...
	I1009 19:33:43.791869  454875 start.go:241] waiting for startup goroutines ...
	I1009 19:33:43.791959  454875 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 19:33:43.794761  454875 addons.go:514] duration metric: took 967.460901ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 19:33:44.106872  454875 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-259172" context rescaled to 1 replicas
	I1009 19:33:44.106894  454875 start.go:246] waiting for cluster config update ...
	I1009 19:33:44.106907  454875 start.go:255] writing updated cluster config ...
	I1009 19:33:44.107236  454875 ssh_runner.go:195] Run: rm -f paused
	I1009 19:33:44.167591  454875 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:33:44.170821  454875 out.go:179] * Done! kubectl is now configured to use "cert-expiration-259172" cluster and "default" namespace by default
	I1009 19:35:04.380785  450527 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000099544s
	I1009 19:35:04.383287  450527 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000998211s
	I1009 19:35:04.386549  450527 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.005685814s
	I1009 19:35:04.386570  450527 kubeadm.go:318] 
	I1009 19:35:04.386666  450527 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:35:04.386753  450527 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:35:04.386844  450527 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:35:04.386942  450527 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:35:04.387030  450527 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:35:04.387118  450527 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:35:04.387123  450527 kubeadm.go:318] 
	I1009 19:35:04.390876  450527 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:35:04.391125  450527 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:35:04.391243  450527 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:35:04.391883  450527 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:35:04.391963  450527 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
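
The failure above comes from a different minikube process (pid 450527) whose control plane never became healthy: all three components were still unreachable after 4m0s, so kubeadm init failed. The lines that follow enumerate containers and gather the kubelet and CRI-O journals; the equivalent manual triage on such a node is roughly (sketch, mirroring the commands kubeadm itself suggests above):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
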
	I1009 19:35:04.392024  450527 kubeadm.go:402] duration metric: took 8m12.230772173s to StartCluster
	I1009 19:35:04.392062  450527 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:35:04.392130  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:35:04.426623  450527 cri.go:89] found id: ""
	I1009 19:35:04.426659  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.426669  450527 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:35:04.426676  450527 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:35:04.426739  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:35:04.452095  450527 cri.go:89] found id: ""
	I1009 19:35:04.452119  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.452128  450527 logs.go:284] No container was found matching "etcd"
	I1009 19:35:04.452135  450527 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:35:04.452201  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:35:04.477106  450527 cri.go:89] found id: ""
	I1009 19:35:04.477130  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.477143  450527 logs.go:284] No container was found matching "coredns"
	I1009 19:35:04.477150  450527 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:35:04.477215  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:35:04.503218  450527 cri.go:89] found id: ""
	I1009 19:35:04.503242  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.503251  450527 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:35:04.503258  450527 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:35:04.503318  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:35:04.528510  450527 cri.go:89] found id: ""
	I1009 19:35:04.528537  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.528546  450527 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:35:04.528552  450527 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:35:04.528613  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:35:04.557283  450527 cri.go:89] found id: ""
	I1009 19:35:04.557353  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.557378  450527 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:35:04.557404  450527 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:35:04.557513  450527 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:35:04.583331  450527 cri.go:89] found id: ""
	I1009 19:35:04.583405  450527 logs.go:282] 0 containers: []
	W1009 19:35:04.583420  450527 logs.go:284] No container was found matching "kindnet"
	I1009 19:35:04.583431  450527 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:35:04.583443  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 19:35:04.663227  450527 logs.go:123] Gathering logs for container status ...
	I1009 19:35:04.663264  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:35:04.699153  450527 logs.go:123] Gathering logs for kubelet ...
	I1009 19:35:04.699181  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:35:04.785027  450527 logs.go:123] Gathering logs for dmesg ...
	I1009 19:35:04.785062  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:35:04.802377  450527 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:35:04.802416  450527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:35:04.874279  450527 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:35:04.865361    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.866360    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.867238    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.868722    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.869255    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:35:04.865361    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.866360    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.867238    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.868722    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:04.869255    2347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1009 19:35:04.874343  450527 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001381597s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000099544s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000998211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.005685814s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:35:04.874406  450527 out.go:285] * 
	W1009 19:35:04.874480  450527 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001381597s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000099544s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000998211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.005685814s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:35:04.874601  450527 out.go:285] * 
	W1009 19:35:04.877213  450527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:35:04.882700  450527 out.go:203] 
	W1009 19:35:04.885478  450527 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001381597s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000099544s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000998211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.005685814s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:35:04.885508  450527 out.go:285] * 
	I1009 19:35:04.888670  450527 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:34:54 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:54.113085135Z" level=info msg="createCtr: removing container 3e544d285e78ce7fb04223de9ce50d89b74b0391e081f64e15835b52292defdb" id=4140952d-45a5-489c-acd7-4cfcd2938ace name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:34:54 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:54.113124274Z" level=info msg="createCtr: deleting container 3e544d285e78ce7fb04223de9ce50d89b74b0391e081f64e15835b52292defdb from storage" id=4140952d-45a5-489c-acd7-4cfcd2938ace name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:34:54 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:54.115952815Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-028248_kube-system_44a435a7d16950107343aca839ec41e2_0" id=4140952d-45a5-489c-acd7-4cfcd2938ace name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.093313472Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=0f0a0c41-5679-4d51-a1cc-b98a87c5aa14 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.094228525Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=1f2f68a4-6e81-443f-836a-1342c8cbfe8f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.095188395Z" level=info msg="Creating container: kube-system/kube-scheduler-force-systemd-env-028248/kube-scheduler" id=880f33ac-6620-40a5-9424-7e017a91e905 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.095426092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.099958683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.10056751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.11162203Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=880f33ac-6620-40a5-9424-7e017a91e905 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.112987287Z" level=info msg="createCtr: deleting container ID f699355d8d8a9df38e85437d5592a23a1c9ec5c86c97280964f5a098fe7a3551 from idIndex" id=880f33ac-6620-40a5-9424-7e017a91e905 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.113140995Z" level=info msg="createCtr: removing container f699355d8d8a9df38e85437d5592a23a1c9ec5c86c97280964f5a098fe7a3551" id=880f33ac-6620-40a5-9424-7e017a91e905 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.1132442Z" level=info msg="createCtr: deleting container f699355d8d8a9df38e85437d5592a23a1c9ec5c86c97280964f5a098fe7a3551 from storage" id=880f33ac-6620-40a5-9424-7e017a91e905 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:34:58 force-systemd-env-028248 crio[835]: time="2025-10-09T19:34:58.123138692Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-env-028248_kube-system_45e09f500a548e01fe0e0eef8911ddb6_0" id=880f33ac-6620-40a5-9424-7e017a91e905 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.093714342Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=fb7fbe44-9931-4598-a0c8-d1a20321721d name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.094899486Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a16ca9a7-a78e-49e2-a370-f187d0906279 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.096068737Z" level=info msg="Creating container: kube-system/etcd-force-systemd-env-028248/etcd" id=2c108e77-abd0-46f1-b7ee-9c722e3e1de1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.096341174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.101219878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.101922876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.113691619Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2c108e77-abd0-46f1-b7ee-9c722e3e1de1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.115283906Z" level=info msg="createCtr: deleting container ID 6149c92b724998b615a98d6f7aac23044988e0be342fcc6fe65d79d6460b5831 from idIndex" id=2c108e77-abd0-46f1-b7ee-9c722e3e1de1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.115465831Z" level=info msg="createCtr: removing container 6149c92b724998b615a98d6f7aac23044988e0be342fcc6fe65d79d6460b5831" id=2c108e77-abd0-46f1-b7ee-9c722e3e1de1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.115562456Z" level=info msg="createCtr: deleting container 6149c92b724998b615a98d6f7aac23044988e0be342fcc6fe65d79d6460b5831 from storage" id=2c108e77-abd0-46f1-b7ee-9c722e3e1de1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:35:01 force-systemd-env-028248 crio[835]: time="2025-10-09T19:35:01.118875146Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-028248_kube-system_6bca48c8435f3cc55af06c0dd8de8b31_0" id=2c108e77-abd0-46f1-b7ee-9c722e3e1de1 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:35:05.966494    2448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:05.967420    2448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:05.968979    2448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:05.969299    2448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:35:05.970756    2448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +4.128207] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:01] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:35:06 up  2:17,  0 user,  load average: 1.38, 1.11, 1.66
	Linux force-systemd-env-028248 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:34:54 force-systemd-env-028248 kubelet[1763]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-env-028248_kube-system(44a435a7d16950107343aca839ec41e2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:34:54 force-systemd-env-028248 kubelet[1763]:  > logger="UnhandledError"
	Oct 09 19:34:54 force-systemd-env-028248 kubelet[1763]: E1009 19:34:54.116540    1763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-env-028248" podUID="44a435a7d16950107343aca839ec41e2"
	Oct 09 19:34:54 force-systemd-env-028248 kubelet[1763]: E1009 19:34:54.151625    1763 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-028248\" not found"
	Oct 09 19:34:57 force-systemd-env-028248 kubelet[1763]: E1009 19:34:57.420038    1763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-028248.186ce979e407aa60  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-028248,UID:force-systemd-env-028248,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-028248 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-028248,},FirstTimestamp:2025-10-09 19:31:04.121043552 +0000 UTC m=+0.744096999,LastTimestamp:2025-10-09 19:31:04.121043552 +0000 UTC m=+0.744096999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet
,ReportingInstance:force-systemd-env-028248,}"
	Oct 09 19:34:58 force-systemd-env-028248 kubelet[1763]: E1009 19:34:58.092551    1763 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-028248\" not found" node="force-systemd-env-028248"
	Oct 09 19:34:58 force-systemd-env-028248 kubelet[1763]: E1009 19:34:58.123510    1763 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:34:58 force-systemd-env-028248 kubelet[1763]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:34:58 force-systemd-env-028248 kubelet[1763]:  > podSandboxID="3704aa72fd47948f8600e4863eb15db794bc68ff5ce5b94d0e2bf2b3aca77b5b"
	Oct 09 19:34:58 force-systemd-env-028248 kubelet[1763]: E1009 19:34:58.123644    1763 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:34:58 force-systemd-env-028248 kubelet[1763]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-env-028248_kube-system(45e09f500a548e01fe0e0eef8911ddb6): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:34:58 force-systemd-env-028248 kubelet[1763]:  > logger="UnhandledError"
	Oct 09 19:34:58 force-systemd-env-028248 kubelet[1763]: E1009 19:34:58.123679    1763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-env-028248" podUID="45e09f500a548e01fe0e0eef8911ddb6"
	Oct 09 19:35:00 force-systemd-env-028248 kubelet[1763]: E1009 19:35:00.737324    1763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-028248?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:35:00 force-systemd-env-028248 kubelet[1763]: I1009 19:35:00.913411    1763 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-028248"
	Oct 09 19:35:00 force-systemd-env-028248 kubelet[1763]: E1009 19:35:00.913789    1763 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-env-028248"
	Oct 09 19:35:01 force-systemd-env-028248 kubelet[1763]: E1009 19:35:01.093253    1763 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-028248\" not found" node="force-systemd-env-028248"
	Oct 09 19:35:01 force-systemd-env-028248 kubelet[1763]: E1009 19:35:01.119249    1763 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:35:01 force-systemd-env-028248 kubelet[1763]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:35:01 force-systemd-env-028248 kubelet[1763]:  > podSandboxID="a1e62bf28383ff18e44d6a83c3b828caac0fca297973e1443883abd21c5c60ad"
	Oct 09 19:35:01 force-systemd-env-028248 kubelet[1763]: E1009 19:35:01.119504    1763 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:35:01 force-systemd-env-028248 kubelet[1763]:         container etcd start failed in pod etcd-force-systemd-env-028248_kube-system(6bca48c8435f3cc55af06c0dd8de8b31): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:35:01 force-systemd-env-028248 kubelet[1763]:  > logger="UnhandledError"
	Oct 09 19:35:01 force-systemd-env-028248 kubelet[1763]: E1009 19:35:01.119549    1763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-028248" podUID="6bca48c8435f3cc55af06c0dd8de8b31"
	Oct 09 19:35:04 force-systemd-env-028248 kubelet[1763]: E1009 19:35:04.152211    1763 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-028248\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-028248 -n force-systemd-env-028248
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-028248 -n force-systemd-env-028248: exit status 6 (345.750158ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:35:06.442311  457848 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-028248" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-028248" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-028248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-028248
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-028248: (1.961648608s)
--- FAIL: TestForceSystemdEnv (510.91s)
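Editor's note: the repeated CRI-O error in the dump above, `Container creation error: cannot open sd-bus: No such file or directory`, is consistent with the runtime being told to use the systemd cgroup manager (which this test forces) while no systemd D-Bus socket is reachable inside the node, so every control-plane container fails at create time. A minimal triage sketch, runnable only while the profile still exists (it is deleted at the end of this test); the /etc/crio path, the cgroup_manager key, and the D-Bus socket path are stock CRI-O/systemd defaults assumed here, not values taken from this report:

	# list the (repeatedly failing) kube containers, as the kubeadm hint suggests
	minikube -p force-systemd-env-028248 ssh -- \
	  "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# check whether CRI-O is set to the systemd cgroup manager and whether D-Bus is actually present
	minikube -p force-systemd-env-028248 ssh -- \
	  "sudo grep -R cgroup_manager /etc/crio/; pidof systemd; ls -l /run/dbus/system_bus_socket"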

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-141121 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-141121 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-5jxdx" [88194336-6480-42b8-86a4-b2ceb15ccef1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1009 18:38:57.916362  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:41:14.045222  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:41:41.758550  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:46:14.045297  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-141121 -n functional-141121
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-09 18:48:17.986678677 +0000 UTC m=+1281.797391385
functional_test.go:1645: (dbg) Run:  kubectl --context functional-141121 describe po hello-node-connect-7d85dfc575-5jxdx -n default
functional_test.go:1645: (dbg) kubectl --context functional-141121 describe po hello-node-connect-7d85dfc575-5jxdx -n default:
Name:             hello-node-connect-7d85dfc575-5jxdx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-141121/192.168.49.2
Start Time:       Thu, 09 Oct 2025 18:38:17 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6m2kv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6m2kv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5jxdx to functional-141121
  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m53s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m53s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-141121 logs hello-node-connect-7d85dfc575-5jxdx -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-141121 logs hello-node-connect-7d85dfc575-5jxdx -n default: exit status 1 (89.183616ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-5jxdx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-141121 logs hello-node-connect-7d85dfc575-5jxdx -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-141121 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-5jxdx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-141121/192.168.49.2
Start Time:       Thu, 09 Oct 2025 18:38:17 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6m2kv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6m2kv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5jxdx to functional-141121
  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m53s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m53s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-141121 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-141121 logs -l app=hello-node-connect: exit status 1 (85.021213ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-5jxdx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-141121 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-141121 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.150.43
IPs:                      10.111.150.43
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31205/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
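Editor's note: the pod events above explain why the service describe shows no endpoints. With the node's registries configured as `short-name-mode = "enforcing"`, the unqualified image name `kicbase/echo-server` resolves ambiguously and the pull is rejected, so the pod never becomes Ready. A minimal sketch of the usual workarounds, assuming the intended image is the Docker Hub copy; the `docker.io` prefix, the tag, and the alias file name are illustrative assumptions, not values taken from this report:

	# point the existing deployment at a fully qualified reference
	kubectl --context functional-141121 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest
	# or, on the node, resolve the short name explicitly (e.g. in /etc/containers/registries.conf.d/echo-server.conf):
	#   [aliases]
	#   "kicbase/echo-server" = "docker.io/kicbase/echo-server"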
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-141121
helpers_test.go:243: (dbg) docker inspect functional-141121:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f",
	        "Created": "2025-10-09T18:35:32.717487556Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:35:32.783607614Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f/hostname",
	        "HostsPath": "/var/lib/docker/containers/d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f/hosts",
	        "LogPath": "/var/lib/docker/containers/d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f/d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f-json.log",
	        "Name": "/functional-141121",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-141121:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-141121",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f",
	                "LowerDir": "/var/lib/docker/overlay2/e56568a0f9f9ebb6b9d8f3bb89486fe928e15c50a8ea2a6b2f8aa2e875a133d3-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e56568a0f9f9ebb6b9d8f3bb89486fe928e15c50a8ea2a6b2f8aa2e875a133d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e56568a0f9f9ebb6b9d8f3bb89486fe928e15c50a8ea2a6b2f8aa2e875a133d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e56568a0f9f9ebb6b9d8f3bb89486fe928e15c50a8ea2a6b2f8aa2e875a133d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-141121",
	                "Source": "/var/lib/docker/volumes/functional-141121/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-141121",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-141121",
	                "name.minikube.sigs.k8s.io": "functional-141121",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "398915f33703a28a48cce4ae5569b99b6384a610f31ca204323e9cea94b83b0f",
	            "SandboxKey": "/var/run/docker/netns/398915f33703",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-141121": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:93:db:3a:a2:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a5f7165537e2d8e8a89e63b641313d7dd628dac5338528917995077faaa97206",
	                    "EndpointID": "30b32e655c4752a3132277c99ecd8284a87f131c9513aeb9278bf8ef6322821e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-141121",
	                        "d657895ac596"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
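The inspect output above shows every container port (22, 2376, 5000, 8441, 32443) published only on 127.0.0.1 with a dynamically assigned host port, e.g. 8441/tcp (the API-server port) mapped to 33153. Below is a minimal Go sketch, not part of the test harness, that reads docker inspect JSON of this shape from stdin and prints the host binding for 8441/tcp; the struct models only the fields visible above, and the filename in the usage line is hypothetical.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspectEntry models only the fields used here; a real docker inspect
// document carries far more (HostConfig, Mounts, Config, ...).
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// docker inspect always emits a JSON array, even for a single container.
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		for _, binding := range e.NetworkSettings.Ports["8441/tcp"] {
			// For the container inspected above this prints 127.0.0.1:33153.
			fmt.Printf("%s:%s\n", binding.HostIp, binding.HostPort)
		}
	}
}

Usage (hypothetical filename): docker inspect functional-141121 | go run portmap.go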
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-141121 -n functional-141121
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-141121 logs -n 25: (1.583892865s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-141121 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │ 09 Oct 25 18:47 UTC │
	│ ssh            │ functional-141121 ssh -- ls -la /mount-9p                                                                          │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │ 09 Oct 25 18:47 UTC │
	│ ssh            │ functional-141121 ssh sudo umount -f /mount-9p                                                                     │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ mount          │ -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount2 --alsologtostderr -v=1 │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ mount          │ -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount1 --alsologtostderr -v=1 │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ mount          │ -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount3 --alsologtostderr -v=1 │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ ssh            │ functional-141121 ssh findmnt -T /mount1                                                                           │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ ssh            │ functional-141121 ssh findmnt -T /mount1                                                                           │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │ 09 Oct 25 18:47 UTC │
	│ ssh            │ functional-141121 ssh findmnt -T /mount2                                                                           │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │ 09 Oct 25 18:47 UTC │
	│ ssh            │ functional-141121 ssh findmnt -T /mount3                                                                           │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │ 09 Oct 25 18:47 UTC │
	│ mount          │ -p functional-141121 --kill=true                                                                                   │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ start          │ -p functional-141121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ start          │ -p functional-141121 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ start          │ -p functional-141121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-141121 --alsologtostderr -v=1                                                     │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:47 UTC │ 09 Oct 25 18:48 UTC │
	│ update-context │ functional-141121 update-context --alsologtostderr -v=2                                                            │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ update-context │ functional-141121 update-context --alsologtostderr -v=2                                                            │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ update-context │ functional-141121 update-context --alsologtostderr -v=2                                                            │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ image          │ functional-141121 image ls --format short --alsologtostderr                                                        │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ image          │ functional-141121 image ls --format yaml --alsologtostderr                                                         │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ ssh            │ functional-141121 ssh pgrep buildkitd                                                                              │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │                     │
	│ image          │ functional-141121 image build -t localhost/my-image:functional-141121 testdata/build --alsologtostderr             │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ image          │ functional-141121 image ls                                                                                         │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ image          │ functional-141121 image ls --format json --alsologtostderr                                                         │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	│ image          │ functional-141121 image ls --format table --alsologtostderr                                                        │ functional-141121 │ jenkins │ v1.37.0 │ 09 Oct 25 18:48 UTC │ 09 Oct 25 18:48 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:47:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:47:57.881283  313702 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:47:57.881480  313702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:57.881494  313702 out.go:374] Setting ErrFile to fd 2...
	I1009 18:47:57.881499  313702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:57.883084  313702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:47:57.883588  313702 out.go:368] Setting JSON to false
	I1009 18:47:57.884723  313702 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5429,"bootTime":1760030249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 18:47:57.884796  313702 start.go:141] virtualization:  
	I1009 18:47:57.888094  313702 out.go:179] * [functional-141121] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 18:47:57.891844  313702 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:47:57.891921  313702 notify.go:220] Checking for updates...
	I1009 18:47:57.894797  313702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:47:57.897961  313702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:47:57.900737  313702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 18:47:57.903606  313702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:47:57.906462  313702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:47:57.909676  313702 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:47:57.910281  313702 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:47:57.940213  313702 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 18:47:57.940340  313702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:47:58.015181  313702 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 18:47:58.002903402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:47:58.015292  313702 docker.go:318] overlay module found
	I1009 18:47:58.018442  313702 out.go:179] * Using the docker driver based on existing profile
	I1009 18:47:58.021331  313702 start.go:305] selected driver: docker
	I1009 18:47:58.021354  313702 start.go:925] validating driver "docker" against &{Name:functional-141121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-141121 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:58.021473  313702 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:47:58.025077  313702 out.go:203] 
	W1009 18:47:58.028005  313702 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:47:58.031097  313702 out.go:203] 
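The dry-run start above exits because the requested 250 MiB is below minikube's usable minimum of 1800 MB (X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY). The following is a minimal Go sketch of that kind of guard, with hypothetical names (validateMemory, minUsableMemoryMB); it illustrates the check implied by the message, not minikube's actual code path.

package main

import (
	"fmt"
	"os"
)

// minUsableMemoryMB mirrors the 1800 MB minimum cited in the failure above.
const minUsableMemoryMB = 1800

// validateMemory rejects requests below the usable minimum, the same class of
// check that produces the RSRC_INSUFFICIENT_REQ_MEMORY exit seen in this log.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// --memory 250MB, as passed to the dry-run start above.
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X", err)
		os.Exit(1) // illustrative; the real exit-code mapping differs
	}
}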
	
	
	==> CRI-O <==
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.2549416Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=73d5ebbc-cc4e-4867-8d7e-ab89f71ea58c name=/runtime.v1.ImageService/PullImage
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.255664128Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=dd8a3f6b-953a-475a-9fdb-1c945f9b2a91 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.259620235Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=885600ed-016c-4ad0-8d5b-187bab2cb661 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.260715068Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d8664516-a65c-440e-a978-9dca77a09ee4 name=/runtime.v1.ImageService/PullImage
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.262651841Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.267628015Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5n2pn/kubernetes-dashboard" id=b58783e1-9241-4e1e-8e5e-77591902340c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.268455111Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.273842048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.274263978Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b3ce3881f7df597cd300f9a84cfd63d29f0a4835a1182c1c84411d90cb13adbe/merged/etc/group: no such file or directory"
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.274724324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.289914995Z" level=info msg="Created container 8e660c381463d03c852295d9231a4ed1993cb0a8407f4748b68ac18b0ecbf5c9: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5n2pn/kubernetes-dashboard" id=b58783e1-9241-4e1e-8e5e-77591902340c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.293780541Z" level=info msg="Starting container: 8e660c381463d03c852295d9231a4ed1993cb0a8407f4748b68ac18b0ecbf5c9" id=1709876c-b8eb-4ac9-98a9-2fda473ada64 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.29582998Z" level=info msg="Started container" PID=6855 containerID=8e660c381463d03c852295d9231a4ed1993cb0a8407f4748b68ac18b0ecbf5c9 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5n2pn/kubernetes-dashboard id=1709876c-b8eb-4ac9-98a9-2fda473ada64 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5cd99916bc319382c32988ea46b2246b28320162eaf0dd7fdac4bd05609b796f
	Oct 09 18:48:04 functional-141121 crio[3540]: time="2025-10-09T18:48:04.520444385Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.562733885Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a" id=d8664516-a65c-440e-a978-9dca77a09ee4 name=/runtime.v1.ImageService/PullImage
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.56338945Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e04740bb-92e8-4619-a464-d39e24c1945e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.565167288Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=1d3c0e11-5b11-48fa-998e-ba9c7b85f9a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.57116741Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-9clq4/dashboard-metrics-scraper" id=145e35c5-010b-425a-88f0-369cb9dc1154 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.572469302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.579520565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.579876048Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ee2619fe27c78ef07a2443e827a3b844d6482499daebdeab9c419b8543bcacb7/merged/etc/group: no such file or directory"
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.580322069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.595836141Z" level=info msg="Created container a8a94200f1e118761e75bb05e78af4b75b87f32bef5a50390ada5afd67655a42: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-9clq4/dashboard-metrics-scraper" id=145e35c5-010b-425a-88f0-369cb9dc1154 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.597046061Z" level=info msg="Starting container: a8a94200f1e118761e75bb05e78af4b75b87f32bef5a50390ada5afd67655a42" id=6663bebc-f478-4aa2-9028-d31b8fdf83cd name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 18:48:05 functional-141121 crio[3540]: time="2025-10-09T18:48:05.599446725Z" level=info msg="Started container" PID=6900 containerID=a8a94200f1e118761e75bb05e78af4b75b87f32bef5a50390ada5afd67655a42 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-9clq4/dashboard-metrics-scraper id=6663bebc-f478-4aa2-9028-d31b8fdf83cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=1594fe7048ae141194220a68b9a5d74db9ea855cc897485cd4b2a01f656e999e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a8a94200f1e11       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   13 seconds ago      Running             dashboard-metrics-scraper   0                   1594fe7048ae1       dashboard-metrics-scraper-77bf4d6c4c-9clq4   kubernetes-dashboard
	8e660c381463d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         15 seconds ago      Running             kubernetes-dashboard        0                   5cd99916bc319       kubernetes-dashboard-855c9754f9-5n2pn        kubernetes-dashboard
	fce7e91c0184c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              29 seconds ago      Exited              mount-munger                0                   97a5bacb54161       busybox-mount                                default
	fa099979befc0       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a                  10 minutes ago      Running             myfrontend                  0                   a1550b160d5d6       sp-pod                                       default
	3531116c7bb1a       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                  10 minutes ago      Running             nginx                       0                   11bb525d162f9       nginx-svc                                    default
	c0df06a71c4bc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         2                   8de7d4a188ff7       storage-provisioner                          kube-system
	6ac2c9857c6e0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   39050c970b58f       coredns-66bc5c9577-6r4d8                     kube-system
	bcc46db98f189       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   8391653122d50       coredns-66bc5c9577-qmkfh                     kube-system
	755af95cadeac       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  2                   cbae895a4cc84       kube-proxy-ndqrg                             kube-system
	4212f8698a3d7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 2                   13618b5fcbe08       kindnet-tgphx                                kube-system
	5bb3e87e080eb       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   fd7945cc8d4da       kube-apiserver-functional-141121             kube-system
	eed286b13dce9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     2                   9192cf225f96c       kube-controller-manager-functional-141121    kube-system
	019bf3489b467       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              2                   ba31826a32020       kube-scheduler-functional-141121             kube-system
	8804f06e9163a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        2                   a1c3fa0d4ec24       etcd-functional-141121                       kube-system
	c6b0298a15896       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         1                   8de7d4a188ff7       storage-provisioner                          kube-system
	4ed182d4500cf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Exited              coredns                     1                   8391653122d50       coredns-66bc5c9577-qmkfh                     kube-system
	bd06bac8bc99f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Exited              kube-scheduler              1                   ba31826a32020       kube-scheduler-functional-141121             kube-system
	70b8095e995d9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Exited              kube-controller-manager     1                   9192cf225f96c       kube-controller-manager-functional-141121    kube-system
	b2d19dd0386f4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Exited              coredns                     1                   39050c970b58f       coredns-66bc5c9577-6r4d8                     kube-system
	617d11e351293       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Exited              kube-proxy                  1                   cbae895a4cc84       kube-proxy-ndqrg                             kube-system
	75c5310f56bdf       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 1                   13618b5fcbe08       kindnet-tgphx                                kube-system
	60e629fd9efb5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Exited              etcd                        1                   a1c3fa0d4ec24       etcd-functional-141121                       kube-system
	
	
	==> coredns [4ed182d4500cf1717f4593ef1952999d0a11eab574e914173b97dc98a4042d74] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46782 - 50197 "HINFO IN 7908360452563715527.7131936965573614505. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018041156s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6ac2c9857c6e0350eb81e21b887ec56c62bcd7f2de9d285f20d72b6347734f15] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58858 - 26208 "HINFO IN 2583695203569613589.6306903055331035014. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025048083s
	
	
	==> coredns [b2d19dd0386f461c7616bc2dbd88faa5e6277e506ac9570c06ab6193aebaa959] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56093 - 61369 "HINFO IN 926110662476410448.7394977073290900013. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016489264s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bcc46db98f189edd1ba814485c5c0b5d7cae0276ef241235f4213240612c93e1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57139 - 56446 "HINFO IN 1546963424821371950.6155172321304058531. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013712029s
	
	
	==> describe nodes <==
	Name:               functional-141121
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-141121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=functional-141121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T18_35_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 18:35:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-141121
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 18:48:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 18:48:14 +0000   Thu, 09 Oct 2025 18:35:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 18:48:14 +0000   Thu, 09 Oct 2025 18:35:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 18:48:14 +0000   Thu, 09 Oct 2025 18:35:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 18:48:14 +0000   Thu, 09 Oct 2025 18:36:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-141121
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 935b16acea454f08a2b0d9a0e8a15fcc
	  System UUID:                87492647-decc-4523-8b8f-bfa430e95c35
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-xxpfr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-5jxdx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-6r4d8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-66bc5c9577-qmkfh                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-141121                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-tgphx                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-141121              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-141121     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ndqrg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-141121              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-9clq4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5n2pn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-141121 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-141121 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-141121 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-141121 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-141121 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-141121 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-141121 event: Registered Node functional-141121 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-141121 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-141121 event: Registered Node functional-141121 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-141121 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-141121 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-141121 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-141121 event: Registered Node functional-141121 in Controller
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014502] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.555614] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036161] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757222] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.781088] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 14209023 ns
	[Oct 9 18:26] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 18:29] overlayfs: idmapped layers are currently not supported
	[  +0.074293] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 9 18:34] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [60e629fd9efb51f0428f88d3d6f94eb41b1b2e157fe556917e5cc0a233a3f640] <==
	{"level":"warn","ts":"2025-10-09T18:36:30.254574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:36:30.298353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:36:30.325195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:36:30.352982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:36:30.381191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:36:30.383071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:36:30.441633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52840","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T18:36:51.270545Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-09T18:36:51.270611Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-141121","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-09T18:36:51.270705Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T18:36:51.270767Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T18:36:51.423623Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T18:36:51.423713Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-09T18:36:51.423782Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-09T18:36:51.423809Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-09T18:36:51.423692Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T18:36:51.423884Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T18:36:51.423918Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-09T18:36:51.424033Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T18:36:51.424055Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T18:36:51.424066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T18:36:51.430261Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-09T18:36:51.430416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T18:36:51.430454Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-09T18:36:51.430464Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-141121","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [8804f06e9163a3ab086a390c4d1951b6f8e958c505ceab98d04aa9639eef2f87] <==
	{"level":"warn","ts":"2025-10-09T18:37:08.231577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.254511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.302147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.312182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.347117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.369081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.403977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.436408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.465442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.487164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.516651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.542054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.574745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.600911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.621591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.657564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.686010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.719319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.742320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.794332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.821380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T18:37:08.897187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52820","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T18:47:06.972891Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1130}
	{"level":"info","ts":"2025-10-09T18:47:06.995900Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1130,"took":"22.636938ms","hash":2669708941,"current-db-size-bytes":3276800,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-09T18:47:06.995961Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2669708941,"revision":1130,"compact-revision":-1}
	
	
	==> kernel <==
	 18:48:19 up  1:30,  0 user,  load average: 0.83, 0.54, 1.45
	Linux functional-141121 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4212f8698a3d7b001af0b2f180f2026e3f8958c0a8ef9a2e54f2ed3cd97d50af] <==
	I1009 18:46:11.347773       1 main.go:301] handling current node
	I1009 18:46:21.342902       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:46:21.342939       1 main.go:301] handling current node
	I1009 18:46:31.347872       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:46:31.347907       1 main.go:301] handling current node
	I1009 18:46:41.344153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:46:41.344260       1 main.go:301] handling current node
	I1009 18:46:51.348140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:46:51.348173       1 main.go:301] handling current node
	I1009 18:47:01.351745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:47:01.351848       1 main.go:301] handling current node
	I1009 18:47:11.347806       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:47:11.347912       1 main.go:301] handling current node
	I1009 18:47:21.341183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:47:21.341223       1 main.go:301] handling current node
	I1009 18:47:31.342291       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:47:31.342325       1 main.go:301] handling current node
	I1009 18:47:41.343981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:47:41.344023       1 main.go:301] handling current node
	I1009 18:47:51.341931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:47:51.341984       1 main.go:301] handling current node
	I1009 18:48:01.341254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:48:01.341287       1 main.go:301] handling current node
	I1009 18:48:11.342222       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:48:11.342255       1 main.go:301] handling current node
	
	
	==> kindnet [75c5310f56bdf8c1ddf6dc6e4cc890524e149d9bb5dc8e071c9e12fd2b7bfe9b] <==
	I1009 18:36:26.560402       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 18:36:26.621185       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1009 18:36:26.621338       1 main.go:148] setting mtu 1500 for CNI 
	I1009 18:36:26.621352       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 18:36:26.621364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T18:36:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 18:36:26.817691       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 18:36:26.817710       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 18:36:26.817738       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 18:36:26.835809       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1009 18:36:31.418611       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 18:36:31.418664       1 metrics.go:72] Registering metrics
	I1009 18:36:31.419141       1 controller.go:711] "Syncing nftables rules"
	I1009 18:36:36.817855       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:36:36.817910       1 main.go:301] handling current node
	I1009 18:36:46.822786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:36:46.822819       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5bb3e87e080eb1623e0d1fc9effddcdcfd19e1fb456b329757c47832575595d3] <==
	E1009 18:37:09.938173       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 18:37:09.960511       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 18:37:09.978444       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 18:37:09.979609       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 18:37:09.997075       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 18:37:10.007056       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 18:37:10.653450       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 18:37:10.701815       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 18:37:11.762326       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 18:37:12.054913       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 18:37:12.153924       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 18:37:12.163893       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 18:37:13.535491       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 18:37:13.583574       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 18:37:13.633687       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 18:37:28.265294       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.42.55"}
	I1009 18:37:37.146171       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.184.30"}
	I1009 18:37:40.745902       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.129.131"}
	E1009 18:38:09.986231       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1009 18:38:17.286096       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55222: use of closed network connection
	I1009 18:38:17.639653       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.150.43"}
	I1009 18:47:09.912940       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 18:47:59.034029       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 18:47:59.402435       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.210.21"}
	I1009 18:47:59.425426       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.227.142"}
	
	
	==> kube-controller-manager [70b8095e995d9907b7d26857ec95952d397230ed4ce7addde1aebea6efa5bc4c] <==
	I1009 18:36:34.672469       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 18:36:34.672536       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 18:36:34.675226       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 18:36:34.678424       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 18:36:34.678478       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 18:36:34.678527       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 18:36:34.678541       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 18:36:34.678548       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 18:36:34.679987       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 18:36:34.687194       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 18:36:34.690107       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 18:36:34.700980       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 18:36:34.709218       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 18:36:34.712555       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 18:36:34.712562       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 18:36:34.712683       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 18:36:34.714969       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 18:36:34.719256       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 18:36:34.720380       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 18:36:34.720466       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 18:36:34.720500       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 18:36:34.720511       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 18:36:34.720518       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 18:36:34.723880       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 18:36:34.727122       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-controller-manager [eed286b13dce90f58b44b26ca656a079edbc904198accb572027867abeb1e5a2] <==
	I1009 18:37:13.227394       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 18:37:13.227496       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 18:37:13.227551       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 18:37:13.227563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 18:37:13.233104       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 18:37:13.233242       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 18:37:13.234734       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 18:37:13.238371       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 18:37:13.239670       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 18:37:13.239771       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 18:37:13.243186       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 18:37:13.246233       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 18:37:13.247346       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 18:37:13.248859       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 18:37:13.253673       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 18:37:13.262392       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 18:37:13.274175       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1009 18:47:59.141735       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1009 18:47:59.175252       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1009 18:47:59.201372       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1009 18:47:59.222420       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1009 18:47:59.252208       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1009 18:47:59.252687       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1009 18:47:59.269097       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1009 18:47:59.269866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [617d11e3512930e02eb6a6d70bcff380a2c1f38472048d12044d5dfad1421c38] <==
	I1009 18:36:28.932836       1 server_linux.go:53] "Using iptables proxy"
	I1009 18:36:30.400010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 18:36:31.503323       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 18:36:31.503641       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 18:36:31.503743       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:36:31.764719       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:36:31.764808       1 server_linux.go:132] "Using iptables Proxier"
	I1009 18:36:31.815714       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:36:31.816086       1 server.go:527] "Version info" version="v1.34.1"
	I1009 18:36:31.816110       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:36:31.817503       1 config.go:200] "Starting service config controller"
	I1009 18:36:31.817534       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 18:36:31.830973       1 config.go:106] "Starting endpoint slice config controller"
	I1009 18:36:31.831070       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 18:36:31.831162       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 18:36:31.831191       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 18:36:31.831975       1 config.go:309] "Starting node config controller"
	I1009 18:36:31.832055       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 18:36:31.832094       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 18:36:31.919784       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 18:36:31.931826       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 18:36:31.931876       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [755af95cadeacd9b9e1a222ec748fceff5dcc421feb6f3f9ca34e2fd4662eab2] <==
	I1009 18:37:11.181846       1 server_linux.go:53] "Using iptables proxy"
	I1009 18:37:11.320268       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 18:37:11.446878       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 18:37:11.446918       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 18:37:11.446987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:37:11.478226       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:37:11.478366       1 server_linux.go:132] "Using iptables Proxier"
	I1009 18:37:11.490341       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:37:11.490611       1 server.go:527] "Version info" version="v1.34.1"
	I1009 18:37:11.490626       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:37:11.491659       1 config.go:200] "Starting service config controller"
	I1009 18:37:11.491672       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 18:37:11.496205       1 config.go:106] "Starting endpoint slice config controller"
	I1009 18:37:11.496291       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 18:37:11.496333       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 18:37:11.496371       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 18:37:11.496972       1 config.go:309] "Starting node config controller"
	I1009 18:37:11.497033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 18:37:11.497064       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 18:37:11.592745       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 18:37:11.596999       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 18:37:11.597024       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [019bf3489b46700ef7b3d7a78ba5b57df5b19fb526fd308278b2884825e4ea4c] <==
	I1009 18:37:08.921581       1 serving.go:386] Generated self-signed cert in-memory
	I1009 18:37:10.211723       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 18:37:10.211763       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:37:10.218285       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 18:37:10.218399       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 18:37:10.218427       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 18:37:10.218488       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 18:37:10.218538       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 18:37:10.218558       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 18:37:10.218477       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:37:10.219162       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:37:10.319390       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:37:10.319498       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 18:37:10.319516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [bd06bac8bc99ff5c56d21c32e26bac97a561e91acc3e542f14d404562f94e2c3] <==
	I1009 18:36:29.073759       1 serving.go:386] Generated self-signed cert in-memory
	W1009 18:36:31.255278       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 18:36:31.255309       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 18:36:31.255319       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 18:36:31.255326       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 18:36:31.388289       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 18:36:31.394216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:36:31.400213       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:36:31.404475       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:36:31.407217       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 18:36:31.412201       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 18:36:31.505674       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:36:51.287306       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1009 18:36:51.287335       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1009 18:36:51.287355       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1009 18:36:51.287375       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:36:51.287591       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1009 18:36:51.287634       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 09 18:47:27 functional-141121 kubelet[3853]: E1009 18:47:27.617720    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5jxdx" podUID="88194336-6480-42b8-86a4-b2ceb15ccef1"
	Oct 09 18:47:29 functional-141121 kubelet[3853]: E1009 18:47:29.617367    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpfr" podUID="46032cdb-f61b-4db6-823b-0840fb3cccc4"
	Oct 09 18:47:40 functional-141121 kubelet[3853]: E1009 18:47:40.617552    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpfr" podUID="46032cdb-f61b-4db6-823b-0840fb3cccc4"
	Oct 09 18:47:41 functional-141121 kubelet[3853]: E1009 18:47:41.617594    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5jxdx" podUID="88194336-6480-42b8-86a4-b2ceb15ccef1"
	Oct 09 18:47:47 functional-141121 kubelet[3853]: I1009 18:47:47.457098    3853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcjgc\" (UniqueName: \"kubernetes.io/projected/b2328d16-652c-45e1-805d-490d7382e085-kube-api-access-mcjgc\") pod \"busybox-mount\" (UID: \"b2328d16-652c-45e1-805d-490d7382e085\") " pod="default/busybox-mount"
	Oct 09 18:47:47 functional-141121 kubelet[3853]: I1009 18:47:47.457156    3853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b2328d16-652c-45e1-805d-490d7382e085-test-volume\") pod \"busybox-mount\" (UID: \"b2328d16-652c-45e1-805d-490d7382e085\") " pod="default/busybox-mount"
	Oct 09 18:47:47 functional-141121 kubelet[3853]: W1009 18:47:47.653114    3853 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f/crio-97a5bacb541618d0ecbf4ea36201021984e55e5042f80503b8bf4587209b0559 WatchSource:0}: Error finding container 97a5bacb541618d0ecbf4ea36201021984e55e5042f80503b8bf4587209b0559: Status 404 returned error can't find the container with id 97a5bacb541618d0ecbf4ea36201021984e55e5042f80503b8bf4587209b0559
	Oct 09 18:47:51 functional-141121 kubelet[3853]: I1009 18:47:51.683918    3853 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b2328d16-652c-45e1-805d-490d7382e085-test-volume\") pod \"b2328d16-652c-45e1-805d-490d7382e085\" (UID: \"b2328d16-652c-45e1-805d-490d7382e085\") "
	Oct 09 18:47:51 functional-141121 kubelet[3853]: I1009 18:47:51.683980    3853 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcjgc\" (UniqueName: \"kubernetes.io/projected/b2328d16-652c-45e1-805d-490d7382e085-kube-api-access-mcjgc\") pod \"b2328d16-652c-45e1-805d-490d7382e085\" (UID: \"b2328d16-652c-45e1-805d-490d7382e085\") "
	Oct 09 18:47:51 functional-141121 kubelet[3853]: I1009 18:47:51.684504    3853 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2328d16-652c-45e1-805d-490d7382e085-test-volume" (OuterVolumeSpecName: "test-volume") pod "b2328d16-652c-45e1-805d-490d7382e085" (UID: "b2328d16-652c-45e1-805d-490d7382e085"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 09 18:47:51 functional-141121 kubelet[3853]: I1009 18:47:51.686311    3853 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2328d16-652c-45e1-805d-490d7382e085-kube-api-access-mcjgc" (OuterVolumeSpecName: "kube-api-access-mcjgc") pod "b2328d16-652c-45e1-805d-490d7382e085" (UID: "b2328d16-652c-45e1-805d-490d7382e085"). InnerVolumeSpecName "kube-api-access-mcjgc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 09 18:47:51 functional-141121 kubelet[3853]: I1009 18:47:51.784488    3853 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b2328d16-652c-45e1-805d-490d7382e085-test-volume\") on node \"functional-141121\" DevicePath \"\""
	Oct 09 18:47:51 functional-141121 kubelet[3853]: I1009 18:47:51.784532    3853 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mcjgc\" (UniqueName: \"kubernetes.io/projected/b2328d16-652c-45e1-805d-490d7382e085-kube-api-access-mcjgc\") on node \"functional-141121\" DevicePath \"\""
	Oct 09 18:47:52 functional-141121 kubelet[3853]: I1009 18:47:52.525036    3853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97a5bacb541618d0ecbf4ea36201021984e55e5042f80503b8bf4587209b0559"
	Oct 09 18:47:52 functional-141121 kubelet[3853]: E1009 18:47:52.618023    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5jxdx" podUID="88194336-6480-42b8-86a4-b2ceb15ccef1"
	Oct 09 18:47:55 functional-141121 kubelet[3853]: E1009 18:47:55.617011    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpfr" podUID="46032cdb-f61b-4db6-823b-0840fb3cccc4"
	Oct 09 18:47:59 functional-141121 kubelet[3853]: I1009 18:47:59.435627    3853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97mrb\" (UniqueName: \"kubernetes.io/projected/debb1ec9-2fed-49aa-8a25-2995ce427478-kube-api-access-97mrb\") pod \"kubernetes-dashboard-855c9754f9-5n2pn\" (UID: \"debb1ec9-2fed-49aa-8a25-2995ce427478\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5n2pn"
	Oct 09 18:47:59 functional-141121 kubelet[3853]: I1009 18:47:59.435692    3853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/debb1ec9-2fed-49aa-8a25-2995ce427478-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-5n2pn\" (UID: \"debb1ec9-2fed-49aa-8a25-2995ce427478\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5n2pn"
	Oct 09 18:47:59 functional-141121 kubelet[3853]: I1009 18:47:59.536065    3853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62ng\" (UniqueName: \"kubernetes.io/projected/8195ad5a-7674-4b64-9a6a-4cada910a06d-kube-api-access-s62ng\") pod \"dashboard-metrics-scraper-77bf4d6c4c-9clq4\" (UID: \"8195ad5a-7674-4b64-9a6a-4cada910a06d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-9clq4"
	Oct 09 18:47:59 functional-141121 kubelet[3853]: I1009 18:47:59.536302    3853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8195ad5a-7674-4b64-9a6a-4cada910a06d-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-9clq4\" (UID: \"8195ad5a-7674-4b64-9a6a-4cada910a06d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-9clq4"
	Oct 09 18:47:59 functional-141121 kubelet[3853]: W1009 18:47:59.967806    3853 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d657895ac596b857bd8b9d76f3b9d18abb979c1e3413f3a7103a264e365c398f/crio-1594fe7048ae141194220a68b9a5d74db9ea855cc897485cd4b2a01f656e999e WatchSource:0}: Error finding container 1594fe7048ae141194220a68b9a5d74db9ea855cc897485cd4b2a01f656e999e: Status 404 returned error can't find the container with id 1594fe7048ae141194220a68b9a5d74db9ea855cc897485cd4b2a01f656e999e
	Oct 09 18:48:06 functional-141121 kubelet[3853]: I1009 18:48:06.596944    3853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5n2pn" podStartSLOduration=2.986332367 podStartE2EDuration="7.596922814s" podCreationTimestamp="2025-10-09 18:47:59 +0000 UTC" firstStartedPulling="2025-10-09 18:47:59.646385937 +0000 UTC m=+655.150203129" lastFinishedPulling="2025-10-09 18:48:04.256976376 +0000 UTC m=+659.760793576" observedRunningTime="2025-10-09 18:48:04.596399758 +0000 UTC m=+660.100216974" watchObservedRunningTime="2025-10-09 18:48:06.596922814 +0000 UTC m=+662.100740030"
	Oct 09 18:48:07 functional-141121 kubelet[3853]: E1009 18:48:07.616885    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5jxdx" podUID="88194336-6480-42b8-86a4-b2ceb15ccef1"
	Oct 09 18:48:08 functional-141121 kubelet[3853]: E1009 18:48:08.616538    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpfr" podUID="46032cdb-f61b-4db6-823b-0840fb3cccc4"
	Oct 09 18:48:19 functional-141121 kubelet[3853]: E1009 18:48:19.617231    3853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpfr" podUID="46032cdb-f61b-4db6-823b-0840fb3cccc4"
	
	
	==> kubernetes-dashboard [8e660c381463d03c852295d9231a4ed1993cb0a8407f4748b68ac18b0ecbf5c9] <==
	2025/10/09 18:48:04 Using namespace: kubernetes-dashboard
	2025/10/09 18:48:04 Using in-cluster config to connect to apiserver
	2025/10/09 18:48:04 Using secret token for csrf signing
	2025/10/09 18:48:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 18:48:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 18:48:04 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 18:48:04 Generating JWE encryption key
	2025/10/09 18:48:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 18:48:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 18:48:04 Initializing JWE encryption key from synchronized object
	2025/10/09 18:48:04 Creating in-cluster Sidecar client
	2025/10/09 18:48:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 18:48:04 Serving insecurely on HTTP port: 9090
	2025/10/09 18:48:04 Starting overwatch
	
	
	==> storage-provisioner [c0df06a71c4bce99e42cfdc42bbea8286d26f051caf2e89ee535c5939926f8a8] <==
	W1009 18:47:55.448854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:47:57.451728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:47:57.456645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:47:59.462186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:47:59.469185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:01.472646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:01.480058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:03.484566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:03.489521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:05.493918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:05.501117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:07.504620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:07.512358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:09.515697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:09.520386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:11.524221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:11.529804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:13.532609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:13.539755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:15.543025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:15.547620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:17.550512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:17.557578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:19.560848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:48:19.568001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c6b0298a15896aefb0b74190b311771eed60c6630afa0833201c7cfb05c59e8c] <==
	I1009 18:36:27.642805       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:36:31.504767       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:36:31.504907       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 18:36:31.525981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:36:34.981089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:36:39.241022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:36:42.840075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:36:45.895840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:36:48.917875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:36:48.923453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 18:36:48.923672       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:36:48.923863       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-141121_b86e040b-4f23-4ccc-bced-b34ed7d39f3a!
	I1009 18:36:48.924390       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8f67ffca-e6e9-4514-a678-1340e603953b", APIVersion:"v1", ResourceVersion:"568", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-141121_b86e040b-4f23-4ccc-bced-b34ed7d39f3a became leader
	W1009 18:36:48.927371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:36:48.933023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 18:36:49.027962       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-141121_b86e040b-4f23-4ccc-bced-b34ed7d39f3a!
	W1009 18:36:50.936617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:36:50.941799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-141121 -n functional-141121
helpers_test.go:269: (dbg) Run:  kubectl --context functional-141121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-xxpfr hello-node-connect-7d85dfc575-5jxdx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-141121 describe pod busybox-mount hello-node-75c85bcc94-xxpfr hello-node-connect-7d85dfc575-5jxdx
helpers_test.go:290: (dbg) kubectl --context functional-141121 describe pod busybox-mount hello-node-75c85bcc94-xxpfr hello-node-connect-7d85dfc575-5jxdx:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141121/192.168.49.2
	Start Time:       Thu, 09 Oct 2025 18:47:47 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://fce7e91c0184ca2e2ac60d9c442c109026f93dd3c89b5d9a95c790f51c2c4795
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 09 Oct 2025 18:47:49 +0000
	      Finished:     Thu, 09 Oct 2025 18:47:49 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcjgc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-mcjgc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  33s   default-scheduler  Successfully assigned default/busybox-mount to functional-141121
	  Normal  Pulling    33s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     31s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.943s (1.943s including waiting). Image size: 3774172 bytes.
	  Normal  Created    31s   kubelet            Created container: mount-munger
	  Normal  Started    31s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-xxpfr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141121/192.168.49.2
	Start Time:       Thu, 09 Oct 2025 18:37:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cz725 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cz725:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-xxpfr to functional-141121
	  Normal   Pulling    7m46s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m46s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m46s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    40s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     40s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-5jxdx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141121/192.168.49.2
	Start Time:       Thu, 09 Oct 2025 18:38:17 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6m2kv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6m2kv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5jxdx to functional-141121
	  Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m3s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m3s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.52s)
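The failure cause visible in the kubelet log and pod events above is CRI-O's short-name resolution: with short-name-mode set to enforcing, the unqualified reference "kicbase/echo-server" resolves to an ambiguous registry list and the pull is rejected. A minimal workaround sketch, reusing the deployment and container names from the describe output above; the fully qualified tag docker.io/kicbase/echo-server:1.0 and the registries.conf path inside the node are illustrative assumptions, not values taken from this run:

	# Option 1: avoid short-name resolution entirely by using a fully qualified image reference
	kubectl --context functional-141121 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:1.0

	# Option 2 (assumed config location): relax CRI-O short-name handling on the node and restart the runtime
	out/minikube-linux-arm64 -p functional-141121 ssh -- "sudo sed -i 's/^short-name-mode.*/short-name-mode = \"permissive\"/' /etc/containers/registries.conf && sudo systemctl restart crio"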

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image load --daemon kicbase/echo-server:functional-141121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-141121" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image load --daemon kicbase/echo-server:functional-141121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-141121" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-141121
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image load --daemon kicbase/echo-server:functional-141121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-141121" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)
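The three ImageCommands failures above (ImageLoadDaemon, ImageReloadDaemon, ImageTagAndLoadDaemon) all reduce to the same check: after "image load --daemon", the tag should appear in "image ls". A hedged sketch for reproducing that check manually against the same profile; the crictl grep filter is illustrative:

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-141121
	out/minikube-linux-arm64 -p functional-141121 image load --daemon kicbase/echo-server:functional-141121
	out/minikube-linux-arm64 -p functional-141121 image ls
	# cross-check directly against CRI-O inside the node
	out/minikube-linux-arm64 -p functional-141121 ssh -- sudo crictl images | grep echo-server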

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-141121 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-141121 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xxpfr" [46032cdb-f61b-4db6-823b-0840fb3cccc4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-141121 -n functional-141121
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-09 18:47:37.508269039 +0000 UTC m=+1241.318981746
functional_test.go:1460: (dbg) Run:  kubectl --context functional-141121 describe po hello-node-75c85bcc94-xxpfr -n default
functional_test.go:1460: (dbg) kubectl --context functional-141121 describe po hello-node-75c85bcc94-xxpfr -n default:
Name:             hello-node-75c85bcc94-xxpfr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-141121/192.168.49.2
Start Time:       Thu, 09 Oct 2025 18:37:37 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cz725 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-cz725:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-xxpfr to functional-141121
  Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m3s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m3s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-141121 logs hello-node-75c85bcc94-xxpfr -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-141121 logs hello-node-75c85bcc94-xxpfr -n default: exit status 1 (104.589712ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-xxpfr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-141121 logs hello-node-75c85bcc94-xxpfr -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.83s)
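As with the connect test, the deployment never gets a running pod because the unqualified image name cannot be resolved under enforcing short-name mode. Two things worth checking on a manual re-run (a sketch, assuming the standard containers-registries.conf(5) layout inside the node; the hello-node-fq name is only for illustration, since hello-node already exists): how the node's registries configuration handles short names, and whether a fully qualified reference pulls cleanly:

	out/minikube-linux-arm64 -p functional-141121 ssh -- sudo grep -R short-name /etc/containers/
	kubectl --context functional-141121 create deployment hello-node-fq --image docker.io/kicbase/echo-server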

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image save kicbase/echo-server:functional-141121 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1009 18:37:38.810550  309704 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:37:38.810765  309704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:37:38.810801  309704 out.go:374] Setting ErrFile to fd 2...
	I1009 18:37:38.810822  309704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:37:38.811101  309704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:37:38.811740  309704 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:37:38.811910  309704 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:37:38.812414  309704 cli_runner.go:164] Run: docker container inspect functional-141121 --format={{.State.Status}}
	I1009 18:37:38.830180  309704 ssh_runner.go:195] Run: systemctl --version
	I1009 18:37:38.830237  309704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141121
	I1009 18:37:38.848769  309704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/functional-141121/id_rsa Username:docker}
	I1009 18:37:38.948785  309704 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1009 18:37:38.948850  309704 cache_images.go:254] Failed to load cached images for "functional-141121": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1009 18:37:38.948878  309704 cache_images.go:266] failed pushing to: functional-141121

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
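The load fails simply because the tarball from the earlier `image save` was never written, so there is nothing at the expected path. The intended round trip looks like this (a sketch using a hypothetical /tmp path rather than the workspace path from the test):

	out/minikube-linux-arm64 -p functional-141121 image save kicbase/echo-server:functional-141121 /tmp/echo-server.tar
	out/minikube-linux-arm64 -p functional-141121 image load /tmp/echo-server.tar
	out/minikube-linux-arm64 -p functional-141121 image ls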

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-141121
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image save --daemon kicbase/echo-server:functional-141121 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-141121
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-141121: exit status 1 (20.854989ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-141121

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-141121

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 service --namespace=default --https --url hello-node: exit status 115 (390.476318ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32617
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-141121 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
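SVC_UNREACHABLE here is a downstream symptom of the DeployApp failure: the hello-node service exists and has a NodePort, but no pod ever became Ready to back it. A quick confirmation (a sketch, not from the recorded run):

	kubectl --context functional-141121 get pods -l app=hello-node
	kubectl --context functional-141121 get endpoints hello-node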

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 service hello-node --url --format={{.IP}}: exit status 115 (403.532683ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-141121 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 service hello-node --url: exit status 115 (400.267441ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32617
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-141121 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32617
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.53s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-732643 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-732643 --output=json --user=testUser: exit status 80 (2.532377322s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b0772246-9bb2-4338-aa2a-2de07ae486e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-732643 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"00a116c5-92f8-4a78-9d55-512eca641b66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-09T19:02:42Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"a392b0c4-d6da-4e5a-a7ba-48a12e523a0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-732643 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.53s)
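The GUEST_PAUSE error comes from minikube shelling out to `sudo runc list -f json` inside the node and runc failing because /run/runc does not exist. A minimal manual reproduction (a sketch, assuming the runtime state directory is /run/runc as the error message indicates):

	out/minikube-linux-arm64 -p json-output-732643 ssh -- sudo ls -ld /run/runc
	out/minikube-linux-arm64 -p json-output-732643 ssh -- sudo runc list -f json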

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.43s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-732643 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-732643 --output=json --user=testUser: exit status 80 (2.426416362s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8ca55222-b794-4adc-a634-9e3d91142fd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-732643 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"8b18e174-fe38-45f1-b69e-322a58d908d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-09T19:02:45Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"69596b8a-0133-4fc1-aa44-f27d2772b27a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-732643 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (2.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:91: Checking cache directory: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v0.0.0
no_kubernetes_test.go:100: Cache directory exists but is empty
no_kubernetes_test.go:102: Cache directory /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v0.0.0 should not exist when using --no-kubernetes
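The assertion boils down to a directory-existence check: with --no-kubernetes the Kubernetes version is pinned to v0.0.0, so nothing should ever be cached under that version. A manual equivalent of the check (a sketch using the literal path from the log):

	ls -la /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v0.0.0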
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect NoKubernetes-034324
helpers_test.go:243: (dbg) docker inspect NoKubernetes-034324:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b",
	        "Created": "2025-10-09T19:21:25.712611629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 422457,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:21:25.787926263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b/hosts",
	        "LogPath": "/var/lib/docker/containers/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b-json.log",
	        "Name": "/NoKubernetes-034324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-034324:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "NoKubernetes-034324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b",
	                "LowerDir": "/var/lib/docker/overlay2/0e8bbfb913d0835f99a65b4da76c5a0ed9d38c0cae4822df8120408035f7a3a3-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e8bbfb913d0835f99a65b4da76c5a0ed9d38c0cae4822df8120408035f7a3a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e8bbfb913d0835f99a65b4da76c5a0ed9d38c0cae4822df8120408035f7a3a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e8bbfb913d0835f99a65b4da76c5a0ed9d38c0cae4822df8120408035f7a3a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-034324",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-034324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-034324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-034324",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-034324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc23acb0b537af0aebf3de869996c6feff6d8cf2704f4cc87c89bd483d293579",
	            "SandboxKey": "/var/run/docker/netns/dc23acb0b537",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-034324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:26:76:7e:eb:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1f92ea7988ef2d7434c33603a3ce0a65b39b302253df0e1958e280e2d6378a1",
	                    "EndpointID": "5e389f7f88c6574cd3e37e0f8c3f21b4137f07178423452985b0b12499f172b6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "NoKubernetes-034324",
	                        "d03759947593"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p NoKubernetes-034324 -n NoKubernetes-034324
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p NoKubernetes-034324 -n NoKubernetes-034324: exit status 6 (303.346461ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:21:30.751049  423642 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-034324" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-034324 logs -n 25
helpers_test.go:260: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-237313                                                                                                │ test-preload-237313         │ jenkins │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:16 UTC │
	│ start   │ -p test-preload-237313 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio     │ test-preload-237313         │ jenkins │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:17 UTC │
	│ image   │ test-preload-237313 image list                                                                                        │ test-preload-237313         │ jenkins │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:17 UTC │
	│ delete  │ -p test-preload-237313                                                                                                │ test-preload-237313         │ jenkins │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:17 UTC │
	│ start   │ -p scheduled-stop-891160 --memory=3072 --driver=docker  --container-runtime=crio                                      │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:18 UTC │
	│ stop    │ -p scheduled-stop-891160 --schedule 5m                                                                                │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 5m                                                                                │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 5m                                                                                │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --cancel-scheduled                                                                           │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:18 UTC │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:19 UTC │
	│ delete  │ -p scheduled-stop-891160                                                                                              │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │ 09 Oct 25 19:19 UTC │
	│ start   │ -p insufficient-storage-402794 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-402794 │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │                     │
	│ delete  │ -p insufficient-storage-402794                                                                                        │ insufficient-storage-402794 │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │ 09 Oct 25 19:19 UTC │
	│ start   │ -p NoKubernetes-034324 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │                     │
	│ start   │ -p NoKubernetes-034324 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │ 09 Oct 25 19:20 UTC │
	│ start   │ -p missing-upgrade-636288 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-636288      │ jenkins │ v1.32.0 │ 09 Oct 25 19:20 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:20 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p missing-upgrade-636288 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio              │ missing-upgrade-636288      │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │                     │
	│ delete  │ -p NoKubernetes-034324                                                                                                │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:21:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:21:24.689032  422140 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:21:24.689401  422140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:24.689434  422140 out.go:374] Setting ErrFile to fd 2...
	I1009 19:21:24.689455  422140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:24.689748  422140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:21:24.690221  422140 out.go:368] Setting JSON to false
	I1009 19:21:24.691118  422140 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7436,"bootTime":1760030249,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:21:24.691214  422140 start.go:141] virtualization:  
	I1009 19:21:24.694911  422140 out.go:179] * [NoKubernetes-034324] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:21:24.699409  422140 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:21:24.699464  422140 notify.go:220] Checking for updates...
	I1009 19:21:19.953965  421414 delete.go:124] DEMOLISHING missing-upgrade-636288 ...
	I1009 19:21:19.954061  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:19.969316  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	W1009 19:21:19.969380  421414 stop.go:83] unable to get state: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:19.969400  421414 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:19.969860  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:19.985471  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:19.985554  421414 delete.go:82] Unable to get host status for missing-upgrade-636288, assuming it has already been deleted: state: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:19.985621  421414 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-636288
	W1009 19:21:20.009030  421414 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-636288 returned with exit code 1
	I1009 19:21:20.009088  421414 kic.go:371] could not find the container missing-upgrade-636288 to remove it. will try anyways
	I1009 19:21:20.009159  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:20.026389  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	W1009 19:21:20.026460  421414 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:20.026536  421414 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-636288 /bin/bash -c "sudo init 0"
	W1009 19:21:20.041897  421414 cli_runner.go:211] docker exec --privileged -t missing-upgrade-636288 /bin/bash -c "sudo init 0" returned with exit code 1
	I1009 19:21:20.041938  421414 oci.go:659] error shutdown missing-upgrade-636288: docker exec --privileged -t missing-upgrade-636288 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:21.042162  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:21.058360  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:21.058423  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:21.058435  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:21.058479  421414 retry.go:31] will retry after 532.20021ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:21.590895  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:21.611764  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:21.611827  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:21.611841  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:21.611870  421414 retry.go:31] will retry after 941.929332ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:22.554029  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:22.575641  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:22.575697  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:22.575706  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:22.575734  421414 retry.go:31] will retry after 1.053484412s: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:23.629422  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:23.645927  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:23.646015  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:23.646030  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:23.646064  421414 retry.go:31] will retry after 1.409603629s: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:24.704191  422140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:21:24.707749  422140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:21:24.711025  422140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:21:24.714085  422140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:21:24.717366  422140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:21:24.720990  422140 config.go:182] Loaded profile config "missing-upgrade-636288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1009 19:21:24.721059  422140 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 19:21:24.721153  422140 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:21:24.748194  422140 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:21:24.748316  422140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:21:24.806071  422140 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:21:24.797067397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:21:24.806207  422140 docker.go:318] overlay module found
	I1009 19:21:24.809174  422140 out.go:179] * Using the docker driver based on user configuration
	I1009 19:21:24.812133  422140 start.go:305] selected driver: docker
	I1009 19:21:24.812159  422140 start.go:925] validating driver "docker" against <nil>
	I1009 19:21:24.812173  422140 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:21:24.812884  422140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:21:24.865126  422140 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:21:24.856323199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:21:24.865224  422140 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 19:21:24.865296  422140 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:21:24.865517  422140 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 19:21:24.868595  422140 out.go:179] * Using Docker driver with root privileges
	I1009 19:21:24.871544  422140 cni.go:84] Creating CNI manager for ""
	I1009 19:21:24.871614  422140 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:21:24.871632  422140 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:21:24.871665  422140 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 19:21:24.871724  422140 start.go:349] cluster config:
	{Name:NoKubernetes-034324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-034324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:21:24.874787  422140 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-034324
	I1009 19:21:24.877658  422140 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:21:24.880571  422140 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:21:24.883405  422140 cache.go:58] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1009 19:21:24.883495  422140 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:21:24.883576  422140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/NoKubernetes-034324/config.json ...
	I1009 19:21:24.883609  422140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/NoKubernetes-034324/config.json: {Name:mk0895e18d43675eb938cf9f052976f2da003225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:21:24.903604  422140 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:21:24.903627  422140 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:21:24.903642  422140 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:21:24.903669  422140 start.go:360] acquireMachinesLock for NoKubernetes-034324: {Name:mkc0a235a952de849342c8d76f6deb47e2084f7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:21:24.903724  422140 start.go:364] duration metric: took 35.241µs to acquireMachinesLock for "NoKubernetes-034324"
	I1009 19:21:24.903747  422140 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-034324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-034324 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:21:24.903807  422140 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:21:24.907075  422140 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:21:24.907310  422140 start.go:159] libmachine.API.Create for "NoKubernetes-034324" (driver="docker")
	I1009 19:21:24.907355  422140 client.go:168] LocalClient.Create starting
	I1009 19:21:24.907421  422140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:21:24.907460  422140 main.go:141] libmachine: Decoding PEM data...
	I1009 19:21:24.907478  422140 main.go:141] libmachine: Parsing certificate...
	I1009 19:21:24.907550  422140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:21:24.907574  422140 main.go:141] libmachine: Decoding PEM data...
	I1009 19:21:24.907587  422140 main.go:141] libmachine: Parsing certificate...
	I1009 19:21:24.907977  422140 cli_runner.go:164] Run: docker network inspect NoKubernetes-034324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:21:24.923610  422140 cli_runner.go:211] docker network inspect NoKubernetes-034324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:21:24.923698  422140 network_create.go:284] running [docker network inspect NoKubernetes-034324] to gather additional debugging logs...
	I1009 19:21:24.923724  422140 cli_runner.go:164] Run: docker network inspect NoKubernetes-034324
	W1009 19:21:24.939881  422140 cli_runner.go:211] docker network inspect NoKubernetes-034324 returned with exit code 1
	I1009 19:21:24.939911  422140 network_create.go:287] error running [docker network inspect NoKubernetes-034324]: docker network inspect NoKubernetes-034324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-034324 not found
	I1009 19:21:24.939925  422140 network_create.go:289] output of [docker network inspect NoKubernetes-034324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-034324 not found
	
	** /stderr **
	I1009 19:21:24.940035  422140 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:21:24.956881  422140 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:21:24.957276  422140 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:21:24.957530  422140 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:21:24.957949  422140 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a02840}
	I1009 19:21:24.957978  422140 network_create.go:124] attempt to create docker network NoKubernetes-034324 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 19:21:24.958037  422140 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-034324 NoKubernetes-034324
	I1009 19:21:25.023838  422140 network_create.go:108] docker network NoKubernetes-034324 192.168.76.0/24 created
	I1009 19:21:25.023870  422140 kic.go:121] calculated static IP "192.168.76.2" for the "NoKubernetes-034324" container
	I1009 19:21:25.023961  422140 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:21:25.040273  422140 cli_runner.go:164] Run: docker volume create NoKubernetes-034324 --label name.minikube.sigs.k8s.io=NoKubernetes-034324 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:21:25.058537  422140 oci.go:103] Successfully created a docker volume NoKubernetes-034324
	I1009 19:21:25.058621  422140 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-034324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-034324 --entrypoint /usr/bin/test -v NoKubernetes-034324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:21:25.636387  422140 oci.go:107] Successfully prepared a docker volume NoKubernetes-034324
	I1009 19:21:25.636456  422140 preload.go:178] Skipping preload logic due to --no-kubernetes flag
	W1009 19:21:25.636587  422140 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:21:25.636719  422140 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:21:25.692646  422140 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-034324 --name NoKubernetes-034324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-034324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-034324 --network NoKubernetes-034324 --ip 192.168.76.2 --volume NoKubernetes-034324:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:21:26.006079  422140 cli_runner.go:164] Run: docker container inspect NoKubernetes-034324 --format={{.State.Running}}
	I1009 19:21:26.030874  422140 cli_runner.go:164] Run: docker container inspect NoKubernetes-034324 --format={{.State.Status}}
	I1009 19:21:26.058634  422140 cli_runner.go:164] Run: docker exec NoKubernetes-034324 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:21:26.112502  422140 oci.go:144] the created container "NoKubernetes-034324" has a running status.
	I1009 19:21:26.112533  422140 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa...
	I1009 19:21:26.968800  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:21:26.968903  422140 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:21:26.998443  422140 cli_runner.go:164] Run: docker container inspect NoKubernetes-034324 --format={{.State.Status}}
	I1009 19:21:27.025155  422140 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:21:27.025183  422140 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-034324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:21:27.079691  422140 cli_runner.go:164] Run: docker container inspect NoKubernetes-034324 --format={{.State.Status}}
	I1009 19:21:27.099903  422140 machine.go:93] provisionDockerMachine start ...
	I1009 19:21:27.100018  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:27.119228  422140 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:27.119572  422140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33355 <nil> <nil>}
	I1009 19:21:27.119591  422140 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:21:27.278072  422140 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-034324
	
	I1009 19:21:27.278095  422140 ubuntu.go:182] provisioning hostname "NoKubernetes-034324"
	I1009 19:21:27.278223  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:27.297528  422140 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:27.297887  422140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33355 <nil> <nil>}
	I1009 19:21:27.297902  422140 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-034324 && echo "NoKubernetes-034324" | sudo tee /etc/hostname
	I1009 19:21:27.464698  422140 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-034324
	
	I1009 19:21:27.464847  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:27.484025  422140 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:27.484338  422140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33355 <nil> <nil>}
	I1009 19:21:27.484361  422140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-034324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-034324/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-034324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:21:27.630279  422140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:21:27.630308  422140 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:21:27.630332  422140 ubuntu.go:190] setting up certificates
	I1009 19:21:27.630344  422140 provision.go:84] configureAuth start
	I1009 19:21:27.630400  422140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-034324
	I1009 19:21:27.648360  422140 provision.go:143] copyHostCerts
	I1009 19:21:27.648416  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:21:27.648464  422140 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:21:27.648481  422140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:21:27.648565  422140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:21:27.649050  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:21:27.649092  422140 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:21:27.649101  422140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:21:27.649170  422140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:21:27.649285  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:21:27.649319  422140 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:21:27.649337  422140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:21:27.649374  422140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:21:27.649459  422140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-034324 san=[127.0.0.1 192.168.76.2 NoKubernetes-034324 localhost minikube]
	I1009 19:21:28.193277  422140 provision.go:177] copyRemoteCerts
	I1009 19:21:28.193351  422140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:21:28.193395  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:28.212838  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:28.313838  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:21:28.313898  422140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:21:28.332774  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:21:28.332840  422140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 19:21:28.351254  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:21:28.351368  422140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:21:28.369645  422140 provision.go:87] duration metric: took 739.277748ms to configureAuth
	I1009 19:21:28.369684  422140 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:21:28.369867  422140 config.go:182] Loaded profile config "NoKubernetes-034324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 19:21:28.370026  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:28.387206  422140 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:28.387513  422140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33355 <nil> <nil>}
	I1009 19:21:28.387534  422140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:21:28.717394  422140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:21:28.717415  422140 machine.go:96] duration metric: took 1.617488804s to provisionDockerMachine
	I1009 19:21:28.717426  422140 client.go:171] duration metric: took 3.81006132s to LocalClient.Create
	I1009 19:21:28.717440  422140 start.go:167] duration metric: took 3.810131491s to libmachine.API.Create "NoKubernetes-034324"
	I1009 19:21:28.717447  422140 start.go:293] postStartSetup for "NoKubernetes-034324" (driver="docker")
	I1009 19:21:28.717457  422140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:21:28.717522  422140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:21:28.717564  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:28.735482  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:28.838592  422140 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:21:28.842058  422140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:21:28.842085  422140 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:21:28.842097  422140 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:21:28.842183  422140 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:21:28.842297  422140 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:21:28.842308  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> /etc/ssl/certs/2863092.pem
	I1009 19:21:28.842411  422140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:21:28.849980  422140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:21:28.867298  422140 start.go:296] duration metric: took 149.831302ms for postStartSetup
	I1009 19:21:28.867663  422140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-034324
	I1009 19:21:28.884754  422140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/NoKubernetes-034324/config.json ...
	I1009 19:21:28.885055  422140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:21:28.885115  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:28.903708  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:29.003925  422140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:21:29.009515  422140 start.go:128] duration metric: took 4.105691927s to createHost
	I1009 19:21:29.009539  422140 start.go:83] releasing machines lock for "NoKubernetes-034324", held for 4.105802033s
	I1009 19:21:29.009614  422140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-034324
	I1009 19:21:29.028454  422140 ssh_runner.go:195] Run: cat /version.json
	I1009 19:21:29.028513  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:29.028786  422140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:21:29.028860  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:29.054046  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:29.058509  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:29.154317  422140 ssh_runner.go:195] Run: systemctl --version
	I1009 19:21:29.262609  422140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:21:29.297399  422140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:21:29.301871  422140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:21:29.301945  422140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:21:29.331894  422140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 19:21:29.331959  422140 start.go:495] detecting cgroup driver to use...
	I1009 19:21:29.332007  422140 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:21:29.332099  422140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:21:29.349639  422140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:21:29.362948  422140 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:21:29.363018  422140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:21:29.380504  422140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:21:29.399628  422140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:21:29.518696  422140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:21:29.639708  422140 docker.go:234] disabling docker service ...
	I1009 19:21:29.639818  422140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:21:29.664880  422140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:21:29.678563  422140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:21:25.056666  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:25.075502  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:25.075654  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:25.075732  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:25.075765  421414 retry.go:31] will retry after 2.146973949s: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:27.222927  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:27.239895  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:27.239967  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:27.239978  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:27.240004  421414 retry.go:31] will retry after 2.640429816s: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:29.796700  422140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:21:29.926978  422140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:21:29.940732  422140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:21:29.955056  422140 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1009 19:21:29.955098  422140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 19:21:29.955149  422140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:29.964216  422140 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:21:29.964280  422140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:29.973339  422140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:29.982196  422140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:29.991395  422140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:21:29.999656  422140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:21:30.025592  422140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:21:30.045828  422140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:21:30.181984  422140 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:21:30.307140  422140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:21:30.307211  422140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:21:30.311066  422140 start.go:563] Will wait 60s for crictl version
	I1009 19:21:30.311130  422140 ssh_runner.go:195] Run: which crictl
	I1009 19:21:30.314703  422140 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:21:30.347134  422140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:21:30.347237  422140 ssh_runner.go:195] Run: crio --version
	I1009 19:21:30.374535  422140 ssh_runner.go:195] Run: crio --version
	I1009 19:21:30.407274  422140 out.go:179] * Preparing CRI-O 1.34.1 ...
	I1009 19:21:30.410469  422140 ssh_runner.go:195] Run: rm -f paused
	I1009 19:21:30.417002  422140 out.go:179] * Done! minikube is ready without Kubernetes!
	I1009 19:21:30.421885  422140 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	
	
	==> CRI-O <==
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.292952594Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.293021715Z" level=info msg="No blockio config file specified, blockio not configured"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.293077748Z" level=info msg="RDT not available in the host system"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.293155994Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.294008559Z" level=info msg="Conmon does support the --sync option"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.294109992Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.294268017Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.294965052Z" level=info msg="Conmon does support the --sync option"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.295059609Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.295299735Z" level=info msg="Updated default CNI network name to "
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.296078215Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/o
ci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mapp
ings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtim
e.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.9\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    m
etrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins
\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.296753047Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.296923839Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.301431879Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.301935516Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.301958672Z" level=info msg="Starting seccomp notifier watcher"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.30200604Z" level=info msg="Create NRI interface"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302098021Z" level=info msg="built-in NRI default validator is disabled"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302112356Z" level=info msg="runtime interface created"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302123219Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302164927Z" level=info msg="runtime interface starting up..."
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302171433Z" level=info msg="starting plugins..."
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302184718Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302250852Z" level=info msg="No systemd watchdog enabled"
	Oct 09 19:21:30 NoKubernetes-034324 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[Oct 9 18:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:54] overlayfs: idmapped layers are currently not supported
	[  +3.829072] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:56] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:57] overlayfs: idmapped layers are currently not supported
	[  +4.128207] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:01] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:21:31 up  2:04,  0 user,  load average: 2.35, 2.01, 1.99
	Linux NoKubernetes-034324 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p NoKubernetes-034324 -n NoKubernetes-034324
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p NoKubernetes-034324 -n NoKubernetes-034324: exit status 6 (318.923253ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:21:31.727080  423843 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-034324" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "NoKubernetes-034324" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect NoKubernetes-034324
helpers_test.go:243: (dbg) docker inspect NoKubernetes-034324:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b",
	        "Created": "2025-10-09T19:21:25.712611629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 422457,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:21:25.787926263Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b/hosts",
	        "LogPath": "/var/lib/docker/containers/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b/d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b-json.log",
	        "Name": "/NoKubernetes-034324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-034324:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "NoKubernetes-034324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0375994759344492603e60292c3525703e8bb5ab9c52f16473d395db3e5836b",
	                "LowerDir": "/var/lib/docker/overlay2/0e8bbfb913d0835f99a65b4da76c5a0ed9d38c0cae4822df8120408035f7a3a3-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e8bbfb913d0835f99a65b4da76c5a0ed9d38c0cae4822df8120408035f7a3a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e8bbfb913d0835f99a65b4da76c5a0ed9d38c0cae4822df8120408035f7a3a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e8bbfb913d0835f99a65b4da76c5a0ed9d38c0cae4822df8120408035f7a3a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-034324",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-034324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-034324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-034324",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-034324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc23acb0b537af0aebf3de869996c6feff6d8cf2704f4cc87c89bd483d293579",
	            "SandboxKey": "/var/run/docker/netns/dc23acb0b537",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-034324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:26:76:7e:eb:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1f92ea7988ef2d7434c33603a3ce0a65b39b302253df0e1958e280e2d6378a1",
	                    "EndpointID": "5e389f7f88c6574cd3e37e0f8c3f21b4137f07178423452985b0b12499f172b6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "NoKubernetes-034324",
	                        "d03759947593"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
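The inspect output above records the ephemeral host ports Docker assigned to the NoKubernetes-034324 container (for example 22/tcp → 127.0.0.1:33355 and 8443/tcp → 127.0.0.1:33358). Purely as an illustration, the same mapping can be read back with the Go-template query that also appears in the "Last Start" log further down; the container name is taken from this report and the command assumes the container is still present:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' NoKubernetes-034324
	# prints 33355 here, the 127.0.0.1 host port minikube dials for SSH on this profile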
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p NoKubernetes-034324 -n NoKubernetes-034324
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p NoKubernetes-034324 -n NoKubernetes-034324: exit status 6 (309.287161ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:21:32.055667  423921 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-034324" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
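The harness itself flags this exit status 6 as possibly benign: the profile was started with --no-kubernetes (see the Audit table below), so the kubeconfig read by the status check has no "NoKubernetes-034324" entry, which is what the stderr line reports. An illustrative way to confirm that from the paths shown above (kubeconfig path copied from the stderr line; not part of the test run):

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21139-284447/kubeconfig | grep NoKubernetes-034324 \
	  || echo "no kubeconfig context for this profile (expected for --no-kubernetes)"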
helpers_test.go:252: <<< TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-034324 logs -n 25
helpers_test.go:260: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-237313                                                                                                │ test-preload-237313         │ jenkins │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:16 UTC │
	│ start   │ -p test-preload-237313 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio     │ test-preload-237313         │ jenkins │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:17 UTC │
	│ image   │ test-preload-237313 image list                                                                                        │ test-preload-237313         │ jenkins │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:17 UTC │
	│ delete  │ -p test-preload-237313                                                                                                │ test-preload-237313         │ jenkins │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:17 UTC │
	│ start   │ -p scheduled-stop-891160 --memory=3072 --driver=docker  --container-runtime=crio                                      │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:18 UTC │
	│ stop    │ -p scheduled-stop-891160 --schedule 5m                                                                                │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 5m                                                                                │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 5m                                                                                │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --cancel-scheduled                                                                           │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:18 UTC │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ stop    │ -p scheduled-stop-891160 --schedule 15s                                                                               │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:19 UTC │
	│ delete  │ -p scheduled-stop-891160                                                                                              │ scheduled-stop-891160       │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │ 09 Oct 25 19:19 UTC │
	│ start   │ -p insufficient-storage-402794 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-402794 │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │                     │
	│ delete  │ -p insufficient-storage-402794                                                                                        │ insufficient-storage-402794 │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │ 09 Oct 25 19:19 UTC │
	│ start   │ -p NoKubernetes-034324 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │                     │
	│ start   │ -p NoKubernetes-034324 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:19 UTC │ 09 Oct 25 19:20 UTC │
	│ start   │ -p missing-upgrade-636288 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-636288      │ jenkins │ v1.32.0 │ 09 Oct 25 19:20 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:20 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p missing-upgrade-636288 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio              │ missing-upgrade-636288      │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │                     │
	│ delete  │ -p NoKubernetes-034324                                                                                                │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-034324         │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:21:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:21:24.689032  422140 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:21:24.689401  422140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:24.689434  422140 out.go:374] Setting ErrFile to fd 2...
	I1009 19:21:24.689455  422140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:24.689748  422140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:21:24.690221  422140 out.go:368] Setting JSON to false
	I1009 19:21:24.691118  422140 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7436,"bootTime":1760030249,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:21:24.691214  422140 start.go:141] virtualization:  
	I1009 19:21:24.694911  422140 out.go:179] * [NoKubernetes-034324] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:21:24.699409  422140 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:21:24.699464  422140 notify.go:220] Checking for updates...
	I1009 19:21:19.953965  421414 delete.go:124] DEMOLISHING missing-upgrade-636288 ...
	I1009 19:21:19.954061  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:19.969316  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	W1009 19:21:19.969380  421414 stop.go:83] unable to get state: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:19.969400  421414 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:19.969860  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:19.985471  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:19.985554  421414 delete.go:82] Unable to get host status for missing-upgrade-636288, assuming it has already been deleted: state: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:19.985621  421414 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-636288
	W1009 19:21:20.009030  421414 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-636288 returned with exit code 1
	I1009 19:21:20.009088  421414 kic.go:371] could not find the container missing-upgrade-636288 to remove it. will try anyways
	I1009 19:21:20.009159  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:20.026389  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	W1009 19:21:20.026460  421414 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:20.026536  421414 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-636288 /bin/bash -c "sudo init 0"
	W1009 19:21:20.041897  421414 cli_runner.go:211] docker exec --privileged -t missing-upgrade-636288 /bin/bash -c "sudo init 0" returned with exit code 1
	I1009 19:21:20.041938  421414 oci.go:659] error shutdown missing-upgrade-636288: docker exec --privileged -t missing-upgrade-636288 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:21.042162  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:21.058360  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:21.058423  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:21.058435  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:21.058479  421414 retry.go:31] will retry after 532.20021ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:21.590895  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:21.611764  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:21.611827  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:21.611841  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:21.611870  421414 retry.go:31] will retry after 941.929332ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:22.554029  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:22.575641  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:22.575697  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:22.575706  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:22.575734  421414 retry.go:31] will retry after 1.053484412s: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:23.629422  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:23.645927  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:23.646015  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:23.646030  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:23.646064  421414 retry.go:31] will retry after 1.409603629s: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:24.704191  422140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:21:24.707749  422140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:21:24.711025  422140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:21:24.714085  422140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:21:24.717366  422140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:21:24.720990  422140 config.go:182] Loaded profile config "missing-upgrade-636288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1009 19:21:24.721059  422140 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 19:21:24.721153  422140 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:21:24.748194  422140 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:21:24.748316  422140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:21:24.806071  422140 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:21:24.797067397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:21:24.806207  422140 docker.go:318] overlay module found
	I1009 19:21:24.809174  422140 out.go:179] * Using the docker driver based on user configuration
	I1009 19:21:24.812133  422140 start.go:305] selected driver: docker
	I1009 19:21:24.812159  422140 start.go:925] validating driver "docker" against <nil>
	I1009 19:21:24.812173  422140 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:21:24.812884  422140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:21:24.865126  422140 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:21:24.856323199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:21:24.865224  422140 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 19:21:24.865296  422140 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:21:24.865517  422140 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 19:21:24.868595  422140 out.go:179] * Using Docker driver with root privileges
	I1009 19:21:24.871544  422140 cni.go:84] Creating CNI manager for ""
	I1009 19:21:24.871614  422140 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:21:24.871632  422140 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:21:24.871665  422140 start.go:1899] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1009 19:21:24.871724  422140 start.go:349] cluster config:
	{Name:NoKubernetes-034324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-034324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:21:24.874787  422140 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-034324
	I1009 19:21:24.877658  422140 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:21:24.880571  422140 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:21:24.883405  422140 cache.go:58] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1009 19:21:24.883495  422140 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:21:24.883576  422140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/NoKubernetes-034324/config.json ...
	I1009 19:21:24.883609  422140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/NoKubernetes-034324/config.json: {Name:mk0895e18d43675eb938cf9f052976f2da003225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:21:24.903604  422140 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:21:24.903627  422140 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:21:24.903642  422140 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:21:24.903669  422140 start.go:360] acquireMachinesLock for NoKubernetes-034324: {Name:mkc0a235a952de849342c8d76f6deb47e2084f7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:21:24.903724  422140 start.go:364] duration metric: took 35.241µs to acquireMachinesLock for "NoKubernetes-034324"
	I1009 19:21:24.903747  422140 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-034324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-034324 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:21:24.903807  422140 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:21:24.907075  422140 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:21:24.907310  422140 start.go:159] libmachine.API.Create for "NoKubernetes-034324" (driver="docker")
	I1009 19:21:24.907355  422140 client.go:168] LocalClient.Create starting
	I1009 19:21:24.907421  422140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:21:24.907460  422140 main.go:141] libmachine: Decoding PEM data...
	I1009 19:21:24.907478  422140 main.go:141] libmachine: Parsing certificate...
	I1009 19:21:24.907550  422140 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:21:24.907574  422140 main.go:141] libmachine: Decoding PEM data...
	I1009 19:21:24.907587  422140 main.go:141] libmachine: Parsing certificate...
	I1009 19:21:24.907977  422140 cli_runner.go:164] Run: docker network inspect NoKubernetes-034324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:21:24.923610  422140 cli_runner.go:211] docker network inspect NoKubernetes-034324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:21:24.923698  422140 network_create.go:284] running [docker network inspect NoKubernetes-034324] to gather additional debugging logs...
	I1009 19:21:24.923724  422140 cli_runner.go:164] Run: docker network inspect NoKubernetes-034324
	W1009 19:21:24.939881  422140 cli_runner.go:211] docker network inspect NoKubernetes-034324 returned with exit code 1
	I1009 19:21:24.939911  422140 network_create.go:287] error running [docker network inspect NoKubernetes-034324]: docker network inspect NoKubernetes-034324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-034324 not found
	I1009 19:21:24.939925  422140 network_create.go:289] output of [docker network inspect NoKubernetes-034324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-034324 not found
	
	** /stderr **
	I1009 19:21:24.940035  422140 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:21:24.956881  422140 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:21:24.957276  422140 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:21:24.957530  422140 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:21:24.957949  422140 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a02840}
	I1009 19:21:24.957978  422140 network_create.go:124] attempt to create docker network NoKubernetes-034324 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 19:21:24.958037  422140 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-034324 NoKubernetes-034324
	I1009 19:21:25.023838  422140 network_create.go:108] docker network NoKubernetes-034324 192.168.76.0/24 created
	I1009 19:21:25.023870  422140 kic.go:121] calculated static IP "192.168.76.2" for the "NoKubernetes-034324" container
	I1009 19:21:25.023961  422140 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:21:25.040273  422140 cli_runner.go:164] Run: docker volume create NoKubernetes-034324 --label name.minikube.sigs.k8s.io=NoKubernetes-034324 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:21:25.058537  422140 oci.go:103] Successfully created a docker volume NoKubernetes-034324
	I1009 19:21:25.058621  422140 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-034324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-034324 --entrypoint /usr/bin/test -v NoKubernetes-034324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:21:25.636387  422140 oci.go:107] Successfully prepared a docker volume NoKubernetes-034324
	I1009 19:21:25.636456  422140 preload.go:178] Skipping preload logic due to --no-kubernetes flag
	W1009 19:21:25.636587  422140 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:21:25.636719  422140 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:21:25.692646  422140 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-034324 --name NoKubernetes-034324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-034324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-034324 --network NoKubernetes-034324 --ip 192.168.76.2 --volume NoKubernetes-034324:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:21:26.006079  422140 cli_runner.go:164] Run: docker container inspect NoKubernetes-034324 --format={{.State.Running}}
	I1009 19:21:26.030874  422140 cli_runner.go:164] Run: docker container inspect NoKubernetes-034324 --format={{.State.Status}}
	I1009 19:21:26.058634  422140 cli_runner.go:164] Run: docker exec NoKubernetes-034324 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:21:26.112502  422140 oci.go:144] the created container "NoKubernetes-034324" has a running status.
	I1009 19:21:26.112533  422140 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa...
	I1009 19:21:26.968800  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 19:21:26.968903  422140 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:21:26.998443  422140 cli_runner.go:164] Run: docker container inspect NoKubernetes-034324 --format={{.State.Status}}
	I1009 19:21:27.025155  422140 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:21:27.025183  422140 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-034324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:21:27.079691  422140 cli_runner.go:164] Run: docker container inspect NoKubernetes-034324 --format={{.State.Status}}
	I1009 19:21:27.099903  422140 machine.go:93] provisionDockerMachine start ...
	I1009 19:21:27.100018  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:27.119228  422140 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:27.119572  422140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33355 <nil> <nil>}
	I1009 19:21:27.119591  422140 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:21:27.278072  422140 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-034324
	
	I1009 19:21:27.278095  422140 ubuntu.go:182] provisioning hostname "NoKubernetes-034324"
	I1009 19:21:27.278223  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:27.297528  422140 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:27.297887  422140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33355 <nil> <nil>}
	I1009 19:21:27.297902  422140 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-034324 && echo "NoKubernetes-034324" | sudo tee /etc/hostname
	I1009 19:21:27.464698  422140 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-034324
	
	I1009 19:21:27.464847  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:27.484025  422140 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:27.484338  422140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33355 <nil> <nil>}
	I1009 19:21:27.484361  422140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-034324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-034324/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-034324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:21:27.630279  422140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:21:27.630308  422140 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:21:27.630332  422140 ubuntu.go:190] setting up certificates
	I1009 19:21:27.630344  422140 provision.go:84] configureAuth start
	I1009 19:21:27.630400  422140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-034324
	I1009 19:21:27.648360  422140 provision.go:143] copyHostCerts
	I1009 19:21:27.648416  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:21:27.648464  422140 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:21:27.648481  422140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:21:27.648565  422140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:21:27.649050  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:21:27.649092  422140 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:21:27.649101  422140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:21:27.649170  422140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:21:27.649285  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:21:27.649319  422140 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:21:27.649337  422140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:21:27.649374  422140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:21:27.649459  422140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-034324 san=[127.0.0.1 192.168.76.2 NoKubernetes-034324 localhost minikube]
	I1009 19:21:28.193277  422140 provision.go:177] copyRemoteCerts
	I1009 19:21:28.193351  422140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:21:28.193395  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:28.212838  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:28.313838  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:21:28.313898  422140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:21:28.332774  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:21:28.332840  422140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 19:21:28.351254  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:21:28.351368  422140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:21:28.369645  422140 provision.go:87] duration metric: took 739.277748ms to configureAuth
	I1009 19:21:28.369684  422140 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:21:28.369867  422140 config.go:182] Loaded profile config "NoKubernetes-034324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 19:21:28.370026  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:28.387206  422140 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:28.387513  422140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33355 <nil> <nil>}
	I1009 19:21:28.387534  422140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:21:28.717394  422140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:21:28.717415  422140 machine.go:96] duration metric: took 1.617488804s to provisionDockerMachine
	I1009 19:21:28.717426  422140 client.go:171] duration metric: took 3.81006132s to LocalClient.Create
	I1009 19:21:28.717440  422140 start.go:167] duration metric: took 3.810131491s to libmachine.API.Create "NoKubernetes-034324"
	I1009 19:21:28.717447  422140 start.go:293] postStartSetup for "NoKubernetes-034324" (driver="docker")
	I1009 19:21:28.717457  422140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:21:28.717522  422140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:21:28.717564  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:28.735482  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:28.838592  422140 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:21:28.842058  422140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:21:28.842085  422140 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:21:28.842097  422140 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:21:28.842183  422140 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:21:28.842297  422140 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:21:28.842308  422140 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> /etc/ssl/certs/2863092.pem
	I1009 19:21:28.842411  422140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:21:28.849980  422140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:21:28.867298  422140 start.go:296] duration metric: took 149.831302ms for postStartSetup
	I1009 19:21:28.867663  422140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-034324
	I1009 19:21:28.884754  422140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/NoKubernetes-034324/config.json ...
	I1009 19:21:28.885055  422140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:21:28.885115  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:28.903708  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:29.003925  422140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:21:29.009515  422140 start.go:128] duration metric: took 4.105691927s to createHost
	I1009 19:21:29.009539  422140 start.go:83] releasing machines lock for "NoKubernetes-034324", held for 4.105802033s
	I1009 19:21:29.009614  422140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-034324
	I1009 19:21:29.028454  422140 ssh_runner.go:195] Run: cat /version.json
	I1009 19:21:29.028513  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:29.028786  422140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:21:29.028860  422140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-034324
	I1009 19:21:29.054046  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:29.058509  422140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33355 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/NoKubernetes-034324/id_rsa Username:docker}
	I1009 19:21:29.154317  422140 ssh_runner.go:195] Run: systemctl --version
	I1009 19:21:29.262609  422140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:21:29.297399  422140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:21:29.301871  422140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:21:29.301945  422140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:21:29.331894  422140 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 19:21:29.331959  422140 start.go:495] detecting cgroup driver to use...
	I1009 19:21:29.332007  422140 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:21:29.332099  422140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:21:29.349639  422140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:21:29.362948  422140 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:21:29.363018  422140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:21:29.380504  422140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:21:29.399628  422140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:21:29.518696  422140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:21:29.639708  422140 docker.go:234] disabling docker service ...
	I1009 19:21:29.639818  422140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:21:29.664880  422140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:21:29.678563  422140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:21:25.056666  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:25.075502  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:25.075654  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:25.075732  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:25.075765  421414 retry.go:31] will retry after 2.146973949s: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:27.222927  421414 cli_runner.go:164] Run: docker container inspect missing-upgrade-636288 --format={{.State.Status}}
	W1009 19:21:27.239895  421414 cli_runner.go:211] docker container inspect missing-upgrade-636288 --format={{.State.Status}} returned with exit code 1
	I1009 19:21:27.239967  421414 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:27.239978  421414 oci.go:673] temporary error: container missing-upgrade-636288 status is  but expect it to be exited
	I1009 19:21:27.240004  421414 retry.go:31] will retry after 2.640429816s: couldn't verify container is exited. %v: unknown state "missing-upgrade-636288": docker container inspect missing-upgrade-636288 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-636288
	I1009 19:21:29.796700  422140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:21:29.926978  422140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:21:29.940732  422140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:21:29.955056  422140 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1009 19:21:29.955098  422140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 19:21:29.955149  422140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:29.964216  422140 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:21:29.964280  422140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:29.973339  422140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:29.982196  422140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:29.991395  422140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:21:29.999656  422140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:21:30.025592  422140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:21:30.045828  422140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:21:30.181984  422140 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:21:30.307140  422140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:21:30.307211  422140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:21:30.311066  422140 start.go:563] Will wait 60s for crictl version
	I1009 19:21:30.311130  422140 ssh_runner.go:195] Run: which crictl
	I1009 19:21:30.314703  422140 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:21:30.347134  422140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:21:30.347237  422140 ssh_runner.go:195] Run: crio --version
	I1009 19:21:30.374535  422140 ssh_runner.go:195] Run: crio --version
	I1009 19:21:30.407274  422140 out.go:179] * Preparing CRI-O 1.34.1 ...
	I1009 19:21:30.410469  422140 ssh_runner.go:195] Run: rm -f paused
	I1009 19:21:30.417002  422140 out.go:179] * Done! minikube is ready without Kubernetes!
	I1009 19:21:30.421885  422140 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	
	
	==> CRI-O <==
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.292952594Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.293021715Z" level=info msg="No blockio config file specified, blockio not configured"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.293077748Z" level=info msg="RDT not available in the host system"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.293155994Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.294008559Z" level=info msg="Conmon does support the --sync option"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.294109992Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.294268017Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.294965052Z" level=info msg="Conmon does support the --sync option"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.295059609Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.295299735Z" level=info msg="Updated default CNI network name to "
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.296078215Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/o
ci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mapp
ings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtim
e.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.9\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    m
etrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins
\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.296753047Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.296923839Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.301431879Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.301935516Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.301958672Z" level=info msg="Starting seccomp notifier watcher"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.30200604Z" level=info msg="Create NRI interface"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302098021Z" level=info msg="built-in NRI default validator is disabled"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302112356Z" level=info msg="runtime interface created"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302123219Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302164927Z" level=info msg="runtime interface starting up..."
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302171433Z" level=info msg="starting plugins..."
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302184718Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 09 19:21:30 NoKubernetes-034324 crio[828]: time="2025-10-09T19:21:30.302250852Z" level=info msg="No systemd watchdog enabled"
	Oct 09 19:21:30 NoKubernetes-034324 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[Oct 9 18:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:54] overlayfs: idmapped layers are currently not supported
	[  +3.829072] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:56] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:57] overlayfs: idmapped layers are currently not supported
	[  +4.128207] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:01] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:21:32 up  2:04,  0 user,  load average: 2.35, 2.01, 1.99
	Linux NoKubernetes-034324 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
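The log above walks through the node-side CRI-O setup: /etc/crictl.yaml is written to point crictl at unix:///var/run/crio/crio.sock, the pause image and cgroup_manager are patched into /etc/crio/crio.conf.d/02-crio.conf with sed, and crio is restarted before the version probe. A quick, purely illustrative way to confirm those settings took effect on the node, assuming the NoKubernetes-034324 profile is still around (these checks are not part of the test itself):

	$ out/minikube-linux-arm64 ssh -p NoKubernetes-034324 sudo cat /etc/crictl.yaml
	$ out/minikube-linux-arm64 ssh -p NoKubernetes-034324 sudo grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf

The "Current CRI-O configuration" block dumped further up shows the expected result: pause_image = "registry.k8s.io/pause:3.9" and cgroup_manager = "cgroupfs".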
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p NoKubernetes-034324 -n NoKubernetes-034324
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p NoKubernetes-034324 -n NoKubernetes-034324: exit status 6 (322.984937ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:21:33.012390  424124 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-034324" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "NoKubernetes-034324" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (2.59s)
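Aside from the status/kubeconfig error that fails the check, the captured logs do show the no-kubernetes path behaving as designed: the Kubernetes binary download is skipped ("Skipping Kubernetes binary download due to --no-kubernetes flag") and the later "describe nodes" helper fails only because /var/lib/minikube/binaries/v0.0.0/kubectl was never installed. A minimal manual spot-check of that state, assuming the profile still exists, might be:

	$ out/minikube-linux-arm64 ssh -p NoKubernetes-034324 sudo systemctl is-active kubelet
	$ out/minikube-linux-arm64 ssh -p NoKubernetes-034324 ls /var/lib/minikube/binaries

The first command should not report "active", and the second should list no versioned kubelet/kubectl directories.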

                                                
                                    
x
+
TestPause/serial/Pause (6.41s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-446510 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-446510 --alsologtostderr -v=5: exit status 80 (1.974593176s)

                                                
                                                
-- stdout --
	* Pausing node pause-446510 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:26:17.757433  448208 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:26:17.758203  448208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:17.758224  448208 out.go:374] Setting ErrFile to fd 2...
	I1009 19:26:17.758231  448208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:17.758496  448208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:26:17.758777  448208 out.go:368] Setting JSON to false
	I1009 19:26:17.758803  448208 mustload.go:65] Loading cluster: pause-446510
	I1009 19:26:17.759213  448208 config.go:182] Loaded profile config "pause-446510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:17.759660  448208 cli_runner.go:164] Run: docker container inspect pause-446510 --format={{.State.Status}}
	I1009 19:26:17.783491  448208 host.go:66] Checking if "pause-446510" exists ...
	I1009 19:26:17.783919  448208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:26:17.845010  448208 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:26:17.835327439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:26:17.845656  448208 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-446510 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 19:26:17.849073  448208 out.go:179] * Pausing node pause-446510 ... 
	I1009 19:26:17.851935  448208 host.go:66] Checking if "pause-446510" exists ...
	I1009 19:26:17.852279  448208 ssh_runner.go:195] Run: systemctl --version
	I1009 19:26:17.852331  448208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:26:17.869582  448208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:26:17.972792  448208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:26:17.985567  448208 pause.go:52] kubelet running: true
	I1009 19:26:17.985636  448208 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:26:18.221684  448208 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:26:18.221781  448208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:26:18.292997  448208 cri.go:89] found id: "4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247"
	I1009 19:26:18.293023  448208 cri.go:89] found id: "a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d"
	I1009 19:26:18.293034  448208 cri.go:89] found id: "7c1221bd7d59b30498b0d2aaec6f37079059cc5bb4843c961406bad3605561b5"
	I1009 19:26:18.293038  448208 cri.go:89] found id: "156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5"
	I1009 19:26:18.293041  448208 cri.go:89] found id: "2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9"
	I1009 19:26:18.293044  448208 cri.go:89] found id: "add9e3f1f95ea9b58263d42fb6df43549754a46ca9fa42ed834cabacc4a50b64"
	I1009 19:26:18.293047  448208 cri.go:89] found id: "6325f53ac167ebc0b8c6efb726fba8ecc26b032658b6717bd74fc3b378a8531c"
	I1009 19:26:18.293051  448208 cri.go:89] found id: "535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3"
	I1009 19:26:18.293054  448208 cri.go:89] found id: "7de28c39361bb6f03c0ac5f94db633d3df7c60a98b79761f0dd3da2e1d048139"
	I1009 19:26:18.293060  448208 cri.go:89] found id: "ccdea48a947e34466622a338db5c1e0fd172d754df8a2c4a985d7333b7953261"
	I1009 19:26:18.293063  448208 cri.go:89] found id: "adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece"
	I1009 19:26:18.293066  448208 cri.go:89] found id: "2957ca26781427c6cacc0b8aa15dae567df148aa2ae15e0f6d497fd8efa73548"
	I1009 19:26:18.293069  448208 cri.go:89] found id: "f0c59f52cc589600adb202824ffa9854d5377ef7e264dff8b2fba0a925be1158"
	I1009 19:26:18.293072  448208 cri.go:89] found id: "77d00a2936d94a965bf028535622c2c9ceb672a420958d449403171c36fb451f"
	I1009 19:26:18.293075  448208 cri.go:89] found id: ""
	I1009 19:26:18.293125  448208 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:26:18.304739  448208 retry.go:31] will retry after 357.026922ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:26:18Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:26:18.662302  448208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:26:18.675630  448208 pause.go:52] kubelet running: false
	I1009 19:26:18.675693  448208 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:26:18.813322  448208 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:26:18.813448  448208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:26:18.880485  448208 cri.go:89] found id: "4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247"
	I1009 19:26:18.880507  448208 cri.go:89] found id: "a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d"
	I1009 19:26:18.880515  448208 cri.go:89] found id: "7c1221bd7d59b30498b0d2aaec6f37079059cc5bb4843c961406bad3605561b5"
	I1009 19:26:18.880519  448208 cri.go:89] found id: "156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5"
	I1009 19:26:18.880522  448208 cri.go:89] found id: "2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9"
	I1009 19:26:18.880525  448208 cri.go:89] found id: "add9e3f1f95ea9b58263d42fb6df43549754a46ca9fa42ed834cabacc4a50b64"
	I1009 19:26:18.880528  448208 cri.go:89] found id: "6325f53ac167ebc0b8c6efb726fba8ecc26b032658b6717bd74fc3b378a8531c"
	I1009 19:26:18.880552  448208 cri.go:89] found id: "535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3"
	I1009 19:26:18.880560  448208 cri.go:89] found id: "7de28c39361bb6f03c0ac5f94db633d3df7c60a98b79761f0dd3da2e1d048139"
	I1009 19:26:18.880567  448208 cri.go:89] found id: "ccdea48a947e34466622a338db5c1e0fd172d754df8a2c4a985d7333b7953261"
	I1009 19:26:18.880574  448208 cri.go:89] found id: "adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece"
	I1009 19:26:18.880578  448208 cri.go:89] found id: "2957ca26781427c6cacc0b8aa15dae567df148aa2ae15e0f6d497fd8efa73548"
	I1009 19:26:18.880581  448208 cri.go:89] found id: "f0c59f52cc589600adb202824ffa9854d5377ef7e264dff8b2fba0a925be1158"
	I1009 19:26:18.880586  448208 cri.go:89] found id: "77d00a2936d94a965bf028535622c2c9ceb672a420958d449403171c36fb451f"
	I1009 19:26:18.880589  448208 cri.go:89] found id: ""
	I1009 19:26:18.880659  448208 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:26:18.891763  448208 retry.go:31] will retry after 526.824224ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:26:18Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:26:19.419572  448208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:26:19.432866  448208 pause.go:52] kubelet running: false
	I1009 19:26:19.432955  448208 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:26:19.571883  448208 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:26:19.571997  448208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:26:19.650439  448208 cri.go:89] found id: "4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247"
	I1009 19:26:19.650462  448208 cri.go:89] found id: "a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d"
	I1009 19:26:19.650469  448208 cri.go:89] found id: "7c1221bd7d59b30498b0d2aaec6f37079059cc5bb4843c961406bad3605561b5"
	I1009 19:26:19.650473  448208 cri.go:89] found id: "156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5"
	I1009 19:26:19.650476  448208 cri.go:89] found id: "2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9"
	I1009 19:26:19.650480  448208 cri.go:89] found id: "add9e3f1f95ea9b58263d42fb6df43549754a46ca9fa42ed834cabacc4a50b64"
	I1009 19:26:19.650483  448208 cri.go:89] found id: "6325f53ac167ebc0b8c6efb726fba8ecc26b032658b6717bd74fc3b378a8531c"
	I1009 19:26:19.650487  448208 cri.go:89] found id: "535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3"
	I1009 19:26:19.650491  448208 cri.go:89] found id: "7de28c39361bb6f03c0ac5f94db633d3df7c60a98b79761f0dd3da2e1d048139"
	I1009 19:26:19.650498  448208 cri.go:89] found id: "ccdea48a947e34466622a338db5c1e0fd172d754df8a2c4a985d7333b7953261"
	I1009 19:26:19.650502  448208 cri.go:89] found id: "adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece"
	I1009 19:26:19.650505  448208 cri.go:89] found id: "2957ca26781427c6cacc0b8aa15dae567df148aa2ae15e0f6d497fd8efa73548"
	I1009 19:26:19.650508  448208 cri.go:89] found id: "f0c59f52cc589600adb202824ffa9854d5377ef7e264dff8b2fba0a925be1158"
	I1009 19:26:19.650522  448208 cri.go:89] found id: "77d00a2936d94a965bf028535622c2c9ceb672a420958d449403171c36fb451f"
	I1009 19:26:19.650530  448208 cri.go:89] found id: ""
	I1009 19:26:19.650584  448208 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:26:19.665564  448208 out.go:203] 
	W1009 19:26:19.668396  448208 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:26:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:26:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:26:19.668420  448208 out.go:285] * 
	* 
	W1009 19:26:19.675594  448208 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:26:19.678425  448208 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-446510 --alsologtostderr -v=5" : exit status 80
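The stderr above captures the immediate cause: the pause path shells out to `sudo runc list -f json`, and runc aborts with "open /run/runc: no such file or directory". The CRI-O configuration dumped earlier in this report (for another profile built from the same base image) sets default_runtime = "crun" with runtime_root = "/run/crun", so the running containers are tracked under crun's state directory and runc has never created its own. A rough manual check of that mismatch, assuming the pause-446510 node is still running (crictl here is only an alternative way to enumerate the containers, not what minikube pause itself uses):

	$ out/minikube-linux-arm64 ssh -p pause-446510 sudo ls -d /run/runc /run/crun
	$ out/minikube-linux-arm64 ssh -p pause-446510 sudo crictl ps --quiet

The first listing would be expected to show /run/crun present and /run/runc missing; the second should still list the kube-system container IDs enumerated above.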
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-446510
helpers_test.go:243: (dbg) docker inspect pause-446510:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c",
	        "Created": "2025-10-09T19:24:33.671116015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 443299,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:24:33.766841099Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c/hosts",
	        "LogPath": "/var/lib/docker/containers/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c-json.log",
	        "Name": "/pause-446510",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-446510:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-446510",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c",
	                "LowerDir": "/var/lib/docker/overlay2/9c5319e164d9b4355c934b5f047e12d3a2b0cdbae730328fa26a74601cebcf5e-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c5319e164d9b4355c934b5f047e12d3a2b0cdbae730328fa26a74601cebcf5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c5319e164d9b4355c934b5f047e12d3a2b0cdbae730328fa26a74601cebcf5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c5319e164d9b4355c934b5f047e12d3a2b0cdbae730328fa26a74601cebcf5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-446510",
	                "Source": "/var/lib/docker/volumes/pause-446510/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-446510",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-446510",
	                "name.minikube.sigs.k8s.io": "pause-446510",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9df2b503fe37b3f1b5846b7d56347188d907f9af324cbab8803642827de80f31",
	            "SandboxKey": "/var/run/docker/netns/9df2b503fe37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-446510": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:0e:45:87:7f:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "51ddeaefb4b54c95c4072f43f62b08889003e6e502e326c9286008b4f2259340",
	                    "EndpointID": "750840acaff65f018f63c17a746b93c04ca3b4a84fadf420f7a65a476208d6fd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-446510",
	                        "2254f2d1ea8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
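The inspect output is also where the host-mapped ports used throughout this report come from: the node's SSH endpoint (127.0.0.1:33395 here) is resolved with the same Go template the log lines above pass to `docker container inspect -f`. Run standalone against this container, it would look roughly like:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-446510

which should print 33395 while the container exists, matching the "22/tcp" entry under NetworkSettings.Ports above.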
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-446510 -n pause-446510
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-446510 -n pause-446510: exit status 2 (348.930475ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-446510 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-446510 logs -n 25: (1.373187225s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-034324                                                                                                                   │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ ssh     │ -p NoKubernetes-034324 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │                     │
	│ stop    │ -p NoKubernetes-034324                                                                                                                   │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p NoKubernetes-034324 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ ssh     │ -p NoKubernetes-034324 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │                     │
	│ delete  │ -p NoKubernetes-034324                                                                                                                   │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:22 UTC │
	│ delete  │ -p missing-upgrade-636288                                                                                                                │ missing-upgrade-636288    │ jenkins │ v1.37.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:22 UTC │
	│ start   │ -p stopped-upgrade-702726 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-702726    │ jenkins │ v1.32.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:22 UTC │
	│ stop    │ -p kubernetes-upgrade-055159                                                                                                             │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:22 UTC │
	│ start   │ -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:23 UTC │
	│ stop    │ stopped-upgrade-702726 stop                                                                                                              │ stopped-upgrade-702726    │ jenkins │ v1.32.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:23 UTC │
	│ start   │ -p stopped-upgrade-702726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-702726    │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p stopped-upgrade-702726                                                                                                                │ stopped-upgrade-702726    │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ start   │ -p running-upgrade-820547 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-820547    │ jenkins │ v1.32.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:24 UTC │
	│ start   │ -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start   │ -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:24 UTC │
	│ start   │ -p running-upgrade-820547 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-820547    │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:24 UTC │
	│ delete  │ -p running-upgrade-820547                                                                                                                │ running-upgrade-820547    │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:24 UTC │
	│ delete  │ -p kubernetes-upgrade-055159                                                                                                             │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:24 UTC │
	│ start   │ -p pause-446510 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-446510              │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:25 UTC │
	│ start   │ -p force-systemd-flag-476949 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │                     │
	│ start   │ -p pause-446510 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-446510              │ jenkins │ v1.37.0 │ 09 Oct 25 19:25 UTC │ 09 Oct 25 19:26 UTC │
	│ pause   │ -p pause-446510 --alsologtostderr -v=5                                                                                                   │ pause-446510              │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:25:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:25:53.161709  447071 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:25:53.161822  447071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:25:53.161832  447071 out.go:374] Setting ErrFile to fd 2...
	I1009 19:25:53.161838  447071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:25:53.162179  447071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:25:53.162537  447071 out.go:368] Setting JSON to false
	I1009 19:25:53.163716  447071 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7705,"bootTime":1760030249,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:25:53.163788  447071 start.go:141] virtualization:  
	I1009 19:25:53.167002  447071 out.go:179] * [pause-446510] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:25:53.170887  447071 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:25:53.170944  447071 notify.go:220] Checking for updates...
	I1009 19:25:53.182417  447071 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:25:53.185336  447071 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:25:53.188224  447071 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:25:53.191077  447071 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:25:53.194012  447071 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:25:53.197220  447071 config.go:182] Loaded profile config "pause-446510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:25:53.197863  447071 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:25:53.225188  447071 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:25:53.225296  447071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:25:53.293137  447071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:25:53.283678598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:25:53.293245  447071 docker.go:318] overlay module found
	I1009 19:25:53.296415  447071 out.go:179] * Using the docker driver based on existing profile
	I1009 19:25:53.299183  447071 start.go:305] selected driver: docker
	I1009 19:25:53.299206  447071 start.go:925] validating driver "docker" against &{Name:pause-446510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:25:53.299353  447071 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:25:53.299482  447071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:25:53.354922  447071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:25:53.345894849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:25:53.355313  447071 cni.go:84] Creating CNI manager for ""
	I1009 19:25:53.355379  447071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:25:53.355423  447071 start.go:349] cluster config:
	{Name:pause-446510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:25:53.358686  447071 out.go:179] * Starting "pause-446510" primary control-plane node in "pause-446510" cluster
	I1009 19:25:53.361515  447071 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:25:53.364445  447071 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:25:53.367270  447071 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:25:53.367320  447071 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:25:53.367336  447071 cache.go:64] Caching tarball of preloaded images
	I1009 19:25:53.367347  447071 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:25:53.367448  447071 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:25:53.367458  447071 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:25:53.367596  447071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/config.json ...
	I1009 19:25:53.387131  447071 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:25:53.387155  447071 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:25:53.387174  447071 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:25:53.387197  447071 start.go:360] acquireMachinesLock for pause-446510: {Name:mk846e63e7d5721a4c09542a50933b19f8fd3ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:25:53.387261  447071 start.go:364] duration metric: took 36.973µs to acquireMachinesLock for "pause-446510"
	I1009 19:25:53.387285  447071 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:25:53.387291  447071 fix.go:54] fixHost starting: 
	I1009 19:25:53.387559  447071 cli_runner.go:164] Run: docker container inspect pause-446510 --format={{.State.Status}}
	I1009 19:25:53.404435  447071 fix.go:112] recreateIfNeeded on pause-446510: state=Running err=<nil>
	W1009 19:25:53.404474  447071 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:25:53.407662  447071 out.go:252] * Updating the running docker "pause-446510" container ...
	I1009 19:25:53.407700  447071 machine.go:93] provisionDockerMachine start ...
	I1009 19:25:53.407788  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:53.424924  447071 main.go:141] libmachine: Using SSH client type: native
	I1009 19:25:53.425257  447071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1009 19:25:53.425267  447071 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:25:53.573822  447071 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-446510
	
	I1009 19:25:53.573854  447071 ubuntu.go:182] provisioning hostname "pause-446510"
	I1009 19:25:53.573918  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:53.593505  447071 main.go:141] libmachine: Using SSH client type: native
	I1009 19:25:53.593840  447071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1009 19:25:53.593858  447071 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-446510 && echo "pause-446510" | sudo tee /etc/hostname
	I1009 19:25:53.752053  447071 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-446510
	
	I1009 19:25:53.752130  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:53.770615  447071 main.go:141] libmachine: Using SSH client type: native
	I1009 19:25:53.770987  447071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1009 19:25:53.771010  447071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-446510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-446510/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-446510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:25:53.914543  447071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:25:53.914567  447071 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:25:53.914599  447071 ubuntu.go:190] setting up certificates
	I1009 19:25:53.914613  447071 provision.go:84] configureAuth start
	I1009 19:25:53.914673  447071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-446510
	I1009 19:25:53.933117  447071 provision.go:143] copyHostCerts
	I1009 19:25:53.933185  447071 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:25:53.933205  447071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:25:53.933283  447071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:25:53.933386  447071 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:25:53.933401  447071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:25:53.933429  447071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:25:53.933493  447071 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:25:53.933504  447071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:25:53.933531  447071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:25:53.933584  447071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.pause-446510 san=[127.0.0.1 192.168.85.2 localhost minikube pause-446510]
	I1009 19:25:54.090693  447071 provision.go:177] copyRemoteCerts
	I1009 19:25:54.090765  447071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:25:54.090819  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:54.110790  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:25:54.218506  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:25:54.238171  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:25:54.261309  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:25:54.284111  447071 provision.go:87] duration metric: took 369.483586ms to configureAuth
	I1009 19:25:54.284139  447071 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:25:54.284361  447071 config.go:182] Loaded profile config "pause-446510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:25:54.284474  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:54.302239  447071 main.go:141] libmachine: Using SSH client type: native
	I1009 19:25:54.302552  447071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1009 19:25:54.302572  447071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:25:59.619589  447071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:25:59.619613  447071 machine.go:96] duration metric: took 6.211904521s to provisionDockerMachine
	I1009 19:25:59.619624  447071 start.go:293] postStartSetup for "pause-446510" (driver="docker")
	I1009 19:25:59.619636  447071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:25:59.619711  447071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:25:59.619759  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:59.638232  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:25:59.742091  447071 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:25:59.745899  447071 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:25:59.745928  447071 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:25:59.745940  447071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:25:59.745995  447071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:25:59.746112  447071 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:25:59.746243  447071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:25:59.753872  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:25:59.771882  447071 start.go:296] duration metric: took 152.24112ms for postStartSetup
	I1009 19:25:59.771959  447071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:25:59.771998  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:59.789834  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:25:59.887528  447071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:25:59.892839  447071 fix.go:56] duration metric: took 6.505541272s for fixHost
	I1009 19:25:59.892866  447071 start.go:83] releasing machines lock for "pause-446510", held for 6.505591512s
	I1009 19:25:59.892950  447071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-446510
	I1009 19:25:59.911296  447071 ssh_runner.go:195] Run: cat /version.json
	I1009 19:25:59.911354  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:59.911354  447071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:25:59.911429  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:59.933567  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:25:59.935135  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:26:00.067219  447071 ssh_runner.go:195] Run: systemctl --version
	I1009 19:26:00.261370  447071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:26:00.350451  447071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:26:00.356449  447071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:26:00.356523  447071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:26:00.374273  447071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:26:00.374300  447071 start.go:495] detecting cgroup driver to use...
	I1009 19:26:00.374340  447071 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:26:00.374394  447071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:26:00.394759  447071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:26:00.414552  447071 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:26:00.414634  447071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:26:00.433258  447071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:26:00.451066  447071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:26:00.603716  447071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:26:00.740519  447071 docker.go:234] disabling docker service ...
	I1009 19:26:00.740585  447071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:26:00.757054  447071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:26:00.770452  447071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:26:00.896594  447071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:26:01.028648  447071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:26:01.043081  447071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:26:01.057408  447071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:26:01.057496  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.067589  447071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:26:01.067691  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.077069  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.086427  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.095689  447071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:26:01.104072  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.113722  447071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.122719  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.132163  447071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:26:01.140322  447071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:26:01.149042  447071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:01.285149  447071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:26:01.462654  447071 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:26:01.462723  447071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:26:01.466634  447071 start.go:563] Will wait 60s for crictl version
	I1009 19:26:01.466752  447071 ssh_runner.go:195] Run: which crictl
	I1009 19:26:01.470356  447071 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:26:01.497419  447071 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:26:01.497504  447071 ssh_runner.go:195] Run: crio --version
	I1009 19:26:01.530007  447071 ssh_runner.go:195] Run: crio --version
	I1009 19:26:01.563093  447071 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:26:01.565978  447071 cli_runner.go:164] Run: docker network inspect pause-446510 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:26:01.582660  447071 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:26:01.586664  447071 kubeadm.go:883] updating cluster {Name:pause-446510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:26:01.586833  447071 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:26:01.586898  447071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:26:01.624633  447071 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:26:01.624658  447071 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:26:01.624715  447071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:26:01.651204  447071 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:26:01.651231  447071 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:26:01.651240  447071 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 19:26:01.651344  447071 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-446510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:26:01.651428  447071 ssh_runner.go:195] Run: crio config
	I1009 19:26:01.717796  447071 cni.go:84] Creating CNI manager for ""
	I1009 19:26:01.717830  447071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:26:01.717857  447071 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:26:01.717895  447071 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-446510 NodeName:pause-446510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:26:01.718045  447071 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-446510"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:26:01.718158  447071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:26:01.726213  447071 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:26:01.726288  447071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:26:01.733971  447071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1009 19:26:01.746644  447071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:26:01.760838  447071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1009 19:26:01.773590  447071 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:26:01.777331  447071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:01.913115  447071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:26:01.926815  447071 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510 for IP: 192.168.85.2
	I1009 19:26:01.926882  447071 certs.go:195] generating shared ca certs ...
	I1009 19:26:01.926914  447071 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:01.927085  447071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:26:01.927153  447071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:26:01.927175  447071 certs.go:257] generating profile certs ...
	I1009 19:26:01.927298  447071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.key
	I1009 19:26:01.927411  447071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/apiserver.key.2b10b5e4
	I1009 19:26:01.927485  447071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/proxy-client.key
	I1009 19:26:01.927637  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:26:01.927703  447071 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:26:01.927747  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:26:01.927798  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:26:01.927861  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:26:01.927912  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:26:01.928017  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:26:01.928724  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:26:01.949867  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:26:01.969547  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:26:01.987561  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:26:02.006902  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 19:26:02.027586  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:26:02.045720  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:26:02.064756  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:26:02.082661  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:26:02.100962  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:26:02.119542  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:26:02.137643  447071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:26:02.150778  447071 ssh_runner.go:195] Run: openssl version
	I1009 19:26:02.157264  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:26:02.166120  447071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:26:02.169939  447071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:26:02.170004  447071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:26:02.212211  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:26:02.220227  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:26:02.229107  447071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:02.233244  447071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:02.233316  447071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:02.275321  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:26:02.283546  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:26:02.293364  447071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:26:02.304063  447071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:26:02.304135  447071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:26:02.348069  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:26:02.356387  447071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:26:02.365176  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:26:02.428917  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:26:02.503284  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:26:02.651018  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:26:02.739099  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:26:02.803072  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:26:02.870846  447071 kubeadm.go:400] StartCluster: {Name:pause-446510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:26:02.870969  447071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:26:02.871028  447071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:26:02.908452  447071 cri.go:89] found id: "4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247"
	I1009 19:26:02.908474  447071 cri.go:89] found id: "a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d"
	I1009 19:26:02.908479  447071 cri.go:89] found id: "7c1221bd7d59b30498b0d2aaec6f37079059cc5bb4843c961406bad3605561b5"
	I1009 19:26:02.908484  447071 cri.go:89] found id: "156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5"
	I1009 19:26:02.908488  447071 cri.go:89] found id: "2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9"
	I1009 19:26:02.908498  447071 cri.go:89] found id: "add9e3f1f95ea9b58263d42fb6df43549754a46ca9fa42ed834cabacc4a50b64"
	I1009 19:26:02.908502  447071 cri.go:89] found id: "6325f53ac167ebc0b8c6efb726fba8ecc26b032658b6717bd74fc3b378a8531c"
	I1009 19:26:02.908513  447071 cri.go:89] found id: "535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3"
	I1009 19:26:02.908517  447071 cri.go:89] found id: "7de28c39361bb6f03c0ac5f94db633d3df7c60a98b79761f0dd3da2e1d048139"
	I1009 19:26:02.908523  447071 cri.go:89] found id: "ccdea48a947e34466622a338db5c1e0fd172d754df8a2c4a985d7333b7953261"
	I1009 19:26:02.908527  447071 cri.go:89] found id: "adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece"
	I1009 19:26:02.908531  447071 cri.go:89] found id: "2957ca26781427c6cacc0b8aa15dae567df148aa2ae15e0f6d497fd8efa73548"
	I1009 19:26:02.908534  447071 cri.go:89] found id: "f0c59f52cc589600adb202824ffa9854d5377ef7e264dff8b2fba0a925be1158"
	I1009 19:26:02.908537  447071 cri.go:89] found id: "77d00a2936d94a965bf028535622c2c9ceb672a420958d449403171c36fb451f"
	I1009 19:26:02.908540  447071 cri.go:89] found id: ""
	I1009 19:26:02.908588  447071 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:26:02.927031  447071 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:26:02Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:26:02.927111  447071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:26:02.945214  447071 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:26:02.945234  447071 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:26:02.945286  447071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:26:02.953390  447071 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:26:02.953906  447071 kubeconfig.go:125] found "pause-446510" server: "https://192.168.85.2:8443"
	I1009 19:26:02.954470  447071 kapi.go:59] client config for pause-446510: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.key", CAFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:26:02.955005  447071 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:26:02.955027  447071 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:26:02.955034  447071 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:26:02.955042  447071 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:26:02.955046  447071 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:26:02.955315  447071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:26:02.966628  447071 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 19:26:02.966663  447071 kubeadm.go:601] duration metric: took 21.421929ms to restartPrimaryControlPlane
	I1009 19:26:02.966672  447071 kubeadm.go:402] duration metric: took 95.836296ms to StartCluster
	I1009 19:26:02.966687  447071 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:02.966745  447071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:26:02.967420  447071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:02.967661  447071 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:26:02.968004  447071 config.go:182] Loaded profile config "pause-446510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:02.968052  447071 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:26:02.971525  447071 out.go:179] * Verifying Kubernetes components...
	I1009 19:26:02.971604  447071 out.go:179] * Enabled addons: 
	I1009 19:26:02.975449  447071 addons.go:514] duration metric: took 7.381549ms for enable addons: enabled=[]
	I1009 19:26:02.975578  447071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:03.223761  447071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:26:03.244580  447071 node_ready.go:35] waiting up to 6m0s for node "pause-446510" to be "Ready" ...
	I1009 19:26:06.944456  447071 node_ready.go:49] node "pause-446510" is "Ready"
	I1009 19:26:06.944490  447071 node_ready.go:38] duration metric: took 3.699882871s for node "pause-446510" to be "Ready" ...
	I1009 19:26:06.944505  447071 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:26:06.944571  447071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:26:06.959256  447071 api_server.go:72] duration metric: took 3.991559226s to wait for apiserver process to appear ...
	I1009 19:26:06.959293  447071 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:26:06.959313  447071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:26:07.017587  447071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:26:07.017684  447071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:26:07.460356  447071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:26:07.469590  447071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:26:07.469622  447071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:26:07.960237  447071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:26:07.977402  447071 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:26:07.978837  447071 api_server.go:141] control plane version: v1.34.1
	I1009 19:26:07.978909  447071 api_server.go:131] duration metric: took 1.019607422s to wait for apiserver health ...
	I1009 19:26:07.978946  447071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:26:07.985182  447071 system_pods.go:59] 7 kube-system pods found
	I1009 19:26:07.985275  447071 system_pods.go:61] "coredns-66bc5c9577-4766q" [8f8889d0-e1aa-4b5b-9d6d-863d79f4f451] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:26:07.985306  447071 system_pods.go:61] "etcd-pause-446510" [729678e3-dff0-4a70-9a51-dd43cd08b28f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:26:07.985344  447071 system_pods.go:61] "kindnet-jmm4z" [25aa7840-d779-4e2c-9dc2-ce45b5a58dab] Running
	I1009 19:26:07.985374  447071 system_pods.go:61] "kube-apiserver-pause-446510" [c3a97e07-c22c-40ba-a88f-543cec6496ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:26:07.985422  447071 system_pods.go:61] "kube-controller-manager-pause-446510" [93d3e3bd-7151-490d-82cf-035e8e9022d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:26:07.985448  447071 system_pods.go:61] "kube-proxy-clcz6" [738e376b-82bd-49dd-9c74-adde76b723b0] Running
	I1009 19:26:07.985471  447071 system_pods.go:61] "kube-scheduler-pause-446510" [31319d77-17ab-40ee-aa43-72a6d6f1b565] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:26:07.985512  447071 system_pods.go:74] duration metric: took 6.541997ms to wait for pod list to return data ...
	I1009 19:26:07.985540  447071 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:26:08.045964  447071 default_sa.go:45] found service account: "default"
	I1009 19:26:08.046040  447071 default_sa.go:55] duration metric: took 60.480178ms for default service account to be created ...
	I1009 19:26:08.046064  447071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:26:08.051111  447071 system_pods.go:86] 7 kube-system pods found
	I1009 19:26:08.051197  447071 system_pods.go:89] "coredns-66bc5c9577-4766q" [8f8889d0-e1aa-4b5b-9d6d-863d79f4f451] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:26:08.051226  447071 system_pods.go:89] "etcd-pause-446510" [729678e3-dff0-4a70-9a51-dd43cd08b28f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:26:08.051267  447071 system_pods.go:89] "kindnet-jmm4z" [25aa7840-d779-4e2c-9dc2-ce45b5a58dab] Running
	I1009 19:26:08.051302  447071 system_pods.go:89] "kube-apiserver-pause-446510" [c3a97e07-c22c-40ba-a88f-543cec6496ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:26:08.051328  447071 system_pods.go:89] "kube-controller-manager-pause-446510" [93d3e3bd-7151-490d-82cf-035e8e9022d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:26:08.051374  447071 system_pods.go:89] "kube-proxy-clcz6" [738e376b-82bd-49dd-9c74-adde76b723b0] Running
	I1009 19:26:08.051398  447071 system_pods.go:89] "kube-scheduler-pause-446510" [31319d77-17ab-40ee-aa43-72a6d6f1b565] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:26:08.051437  447071 system_pods.go:126] duration metric: took 5.351359ms to wait for k8s-apps to be running ...
	I1009 19:26:08.051468  447071 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:26:08.051580  447071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:26:08.067688  447071 system_svc.go:56] duration metric: took 16.212029ms WaitForService to wait for kubelet
	I1009 19:26:08.067774  447071 kubeadm.go:586] duration metric: took 5.100079855s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:26:08.067810  447071 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:26:08.071994  447071 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:26:08.072092  447071 node_conditions.go:123] node cpu capacity is 2
	I1009 19:26:08.072122  447071 node_conditions.go:105] duration metric: took 4.290619ms to run NodePressure ...
	I1009 19:26:08.072170  447071 start.go:241] waiting for startup goroutines ...
	I1009 19:26:08.072184  447071 start.go:246] waiting for cluster config update ...
	I1009 19:26:08.072194  447071 start.go:255] writing updated cluster config ...
	I1009 19:26:08.072536  447071 ssh_runner.go:195] Run: rm -f paused
	I1009 19:26:08.076921  447071 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:26:08.077547  447071 kapi.go:59] client config for pause-446510: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.key", CAFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:26:08.080734  447071 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4766q" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:26:10.086673  447071 pod_ready.go:104] pod "coredns-66bc5c9577-4766q" is not "Ready", error: <nil>
	W1009 19:26:12.585896  447071 pod_ready.go:104] pod "coredns-66bc5c9577-4766q" is not "Ready", error: <nil>
	I1009 19:26:14.586680  447071 pod_ready.go:94] pod "coredns-66bc5c9577-4766q" is "Ready"
	I1009 19:26:14.586711  447071 pod_ready.go:86] duration metric: took 6.505937028s for pod "coredns-66bc5c9577-4766q" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:14.589948  447071 pod_ready.go:83] waiting for pod "etcd-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.599024  447071 pod_ready.go:94] pod "etcd-pause-446510" is "Ready"
	I1009 19:26:16.599096  447071 pod_ready.go:86] duration metric: took 2.009118091s for pod "etcd-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.602797  447071 pod_ready.go:83] waiting for pod "kube-apiserver-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.611325  447071 pod_ready.go:94] pod "kube-apiserver-pause-446510" is "Ready"
	I1009 19:26:16.611410  447071 pod_ready.go:86] duration metric: took 8.543411ms for pod "kube-apiserver-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.615045  447071 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.624181  447071 pod_ready.go:94] pod "kube-controller-manager-pause-446510" is "Ready"
	I1009 19:26:16.624258  447071 pod_ready.go:86] duration metric: took 9.146119ms for pod "kube-controller-manager-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.629496  447071 pod_ready.go:83] waiting for pod "kube-proxy-clcz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.984147  447071 pod_ready.go:94] pod "kube-proxy-clcz6" is "Ready"
	I1009 19:26:16.984174  447071 pod_ready.go:86] duration metric: took 354.61207ms for pod "kube-proxy-clcz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:17.186050  447071 pod_ready.go:83] waiting for pod "kube-scheduler-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:17.584704  447071 pod_ready.go:94] pod "kube-scheduler-pause-446510" is "Ready"
	I1009 19:26:17.584784  447071 pod_ready.go:86] duration metric: took 398.705957ms for pod "kube-scheduler-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:17.584823  447071 pod_ready.go:40] duration metric: took 9.507856963s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:26:17.671192  447071 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:26:17.675482  447071 out.go:179] * Done! kubectl is now configured to use "pause-446510" cluster and "default" namespace by default
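Note on the health checks in the start log above: minikube repeatedly probes https://192.168.85.2:8443/healthz, treating HTTP 500 (post-start hooks such as rbac/bootstrap-roles and bootstrap-controller still pending) as not-ready and succeeding once the endpoint returns 200. A minimal stand-alone sketch of such a poll loop, an illustration rather than minikube's implementation; the URL, interval and timeout are assumptions, and TLS verification is skipped only to keep the sketch self-contained:

// Sketch: poll an apiserver /healthz endpoint until it returns 200 OK,
// as the start log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is serving
			}
		}
		// HTTP 500 or a connection error means post-start hooks are still running.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}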
	
	
	==> CRI-O <==
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.557333018Z" level=info msg="Created container a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d: kube-system/kube-scheduler-pause-446510/kube-scheduler" id=687d6956-7f13-4b70-8fed-bb9f317ae3cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.558694891Z" level=info msg="Starting container: a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d" id=46243551-d12a-4d38-b213-65737b3824d4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.566664608Z" level=info msg="Started container" PID=2291 containerID=2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9 description=kube-system/kube-proxy-clcz6/kube-proxy id=39ddc8a8-3ef8-443b-a9cb-8739b002d15a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8d4f950a27fb273ee38019c7deda49b55437b141a0b7df8ca9d8828ca164fae
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.581539954Z" level=info msg="Created container 156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5: kube-system/kindnet-jmm4z/kindnet-cni" id=6f755dff-e6e5-43fc-90e8-17cb9241bc74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.585225231Z" level=info msg="Starting container: 156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5" id=580f0df8-e4c2-45ef-9d56-eef77f40c3da name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.61220905Z" level=info msg="Started container" PID=2315 containerID=a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d description=kube-system/kube-scheduler-pause-446510/kube-scheduler id=46243551-d12a-4d38-b213-65737b3824d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bd4c37cb098a361d5e58bf64b2b8b60fef80c5e929c48e32ae457dec50f54fe
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.632517701Z" level=info msg="Started container" PID=2314 containerID=156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5 description=kube-system/kindnet-jmm4z/kindnet-cni id=580f0df8-e4c2-45ef-9d56-eef77f40c3da name=/runtime.v1.RuntimeService/StartContainer sandboxID=6bc6ca5eaa699340634edc38c40860af490ab90e1e398eefd48bda16b303ac4b
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.678747223Z" level=info msg="Created container 4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247: kube-system/kube-controller-manager-pause-446510/kube-controller-manager" id=ee02889d-040e-4242-962a-2330122d9967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.679409197Z" level=info msg="Starting container: 4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247" id=b5cf1474-830f-4410-9fa9-98dee9ade4c3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.684065195Z" level=info msg="Started container" PID=2335 containerID=4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247 description=kube-system/kube-controller-manager-pause-446510/kube-controller-manager id=b5cf1474-830f-4410-9fa9-98dee9ade4c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e58ccfa9d182fd450fa34026c190674d8e600ab8dedaf18ea1aca78cb3b72138
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.03434818Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.038163535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.038200647Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.038223121Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.041253727Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.041289477Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.041312887Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.044541936Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.044584817Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.044609761Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.047892283Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.047927205Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.047951533Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.051155581Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.051192973Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4c2452d6f36d6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago       Running             kube-controller-manager   1                   e58ccfa9d182f       kube-controller-manager-pause-446510   kube-system
	a6f5fd6f4bea9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago       Running             kube-scheduler            1                   3bd4c37cb098a       kube-scheduler-pause-446510            kube-system
	7c1221bd7d59b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   18 seconds ago       Running             coredns                   1                   bf9887fd41187       coredns-66bc5c9577-4766q               kube-system
	156adacbda5a7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   18 seconds ago       Running             kindnet-cni               1                   6bc6ca5eaa699       kindnet-jmm4z                          kube-system
	2bc102c3bd634       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   18 seconds ago       Running             kube-proxy                1                   f8d4f950a27fb       kube-proxy-clcz6                       kube-system
	add9e3f1f95ea       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago       Running             etcd                      1                   000d837fe2be5       etcd-pause-446510                      kube-system
	6325f53ac167e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago       Running             kube-apiserver            1                   407a91e13c50c       kube-apiserver-pause-446510            kube-system
	535e956a01dd2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   30 seconds ago       Exited              coredns                   0                   bf9887fd41187       coredns-66bc5c9577-4766q               kube-system
	7de28c39361bb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   f8d4f950a27fb       kube-proxy-clcz6                       kube-system
	ccdea48a947e3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   6bc6ca5eaa699       kindnet-jmm4z                          kube-system
	adff605a7c65b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   e58ccfa9d182f       kube-controller-manager-pause-446510   kube-system
	2957ca2678142       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   3bd4c37cb098a       kube-scheduler-pause-446510            kube-system
	f0c59f52cc589       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   000d837fe2be5       etcd-pause-446510                      kube-system
	77d00a2936d94       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   407a91e13c50c       kube-apiserver-pause-446510            kube-system
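The table above is the runtime's view of the pods; the same kube-system container IDs can be listed with the crictl invocation that appears earlier in this log. A small sketch that shells out to that exact command, illustrative only and assuming crictl and sudo are available on the node:

// Sketch: list kube-system container IDs the same way the log above does,
// by shelling out to crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found container:", id)
	}
}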
	
	
	==> coredns [535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48710 - 7967 "HINFO IN 4988741048091739230.5390373744845976817. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014183299s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7c1221bd7d59b30498b0d2aaec6f37079059cc5bb4843c961406bad3605561b5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55430 - 8399 "HINFO IN 1930796750062550998.7867124658167409791. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035398695s
	
	
	==> describe nodes <==
	Name:               pause-446510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-446510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=pause-446510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_25_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:25:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-446510
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:26:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:25:50 +0000   Thu, 09 Oct 2025 19:24:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:25:50 +0000   Thu, 09 Oct 2025 19:24:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:25:50 +0000   Thu, 09 Oct 2025 19:24:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:25:50 +0000   Thu, 09 Oct 2025 19:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-446510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7335011f64e0455987ebbdbb40738a9d
	  System UUID:                8a10f5e3-c4bc-4ded-a988-ed05ff3fb3ee
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4766q                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     71s
	  kube-system                 etcd-pause-446510                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kindnet-jmm4z                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      71s
	  kube-system                 kube-apiserver-pause-446510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-pause-446510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-clcz6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-pause-446510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 70s   kube-proxy       
	  Normal   Starting                 13s   kube-proxy       
	  Normal   Starting                 76s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 76s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  76s   kubelet          Node pause-446510 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s   kubelet          Node pause-446510 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s   kubelet          Node pause-446510 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           72s   node-controller  Node pause-446510 event: Registered Node pause-446510 in Controller
	  Normal   NodeReady                30s   kubelet          Node pause-446510 status is now: NodeReady
	  Normal   RegisteredNode           11s   node-controller  Node pause-446510 event: Registered Node pause-446510 in Controller
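The Ready condition shown above is what the node_ready wait earlier in this log was watching before it reported the node "Ready". A short client-go sketch that reads the same condition, illustrative only; the kubeconfig path is the one referenced earlier in this log and the node name is pause-446510:

// Sketch: read a node's Ready condition with client-go, the same condition
// shown in the "describe nodes" output above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21139-284447/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "pause-446510", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s (%s)\n", cond.Status, cond.Reason)
		}
	}
}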
	
	
	==> dmesg <==
	[Oct 9 18:56] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:57] overlayfs: idmapped layers are currently not supported
	[  +4.128207] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:01] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [add9e3f1f95ea9b58263d42fb6df43549754a46ca9fa42ed834cabacc4a50b64] <==
	{"level":"warn","ts":"2025-10-09T19:26:05.293209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.301842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.328543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.346851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.366954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.389139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.412482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.428191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.450689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.462701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.509743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.512350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.528374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.544435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.563118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.624753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.670228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.671436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.695838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.721767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.763104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.819028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.846213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.870382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.929059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47430","server-name":"","error":"EOF"}
	
	
	==> etcd [f0c59f52cc589600adb202824ffa9854d5377ef7e264dff8b2fba0a925be1158] <==
	{"level":"warn","ts":"2025-10-09T19:25:00.070180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.083435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.117778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.150867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.178829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.195757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.333120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38120","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T19:25:54.478379Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-09T19:25:54.478450Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-446510","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-09T19:25:54.479071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T19:25:54.619349Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-09T19:25:54.619517Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T19:25:54.619566Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T19:25:54.619579Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-09T19:25:54.619538Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-09T19:25:54.619685Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T19:25:54.619720Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-09T19:25:54.619768Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"error","ts":"2025-10-09T19:25:54.619780Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T19:25:54.619813Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-09T19:25:54.619823Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-09T19:25:54.623426Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-09T19:25:54.623505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T19:25:54.623540Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T19:25:54.623547Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-446510","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 19:26:20 up  2:08,  0 user,  load average: 2.47, 3.09, 2.52
	Linux pause-446510 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5] <==
	I1009 19:26:02.737157       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:26:02.819571       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:26:02.819707       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:26:02.819720       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:26:02.819731       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:26:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:26:03.034121       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:26:03.034263       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:26:03.045139       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:26:03.046196       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1009 19:26:07.045982       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:26:07.046102       1 metrics.go:72] Registering metrics
	I1009 19:26:07.046208       1 controller.go:711] "Syncing nftables rules"
	I1009 19:26:13.033835       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:26:13.033986       1 main.go:301] handling current node
	
	
	==> kindnet [ccdea48a947e34466622a338db5c1e0fd172d754df8a2c4a985d7333b7953261] <==
	I1009 19:25:09.916694       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:25:09.916939       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:25:09.917094       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:25:09.917112       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:25:09.917122       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:25:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:25:10.120471       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:25:10.121326       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:25:10.121449       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:25:10.121608       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:25:40.120559       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:25:40.121564       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 19:25:40.121574       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:25:40.121676       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1009 19:25:41.821979       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:25:41.822013       1 metrics.go:72] Registering metrics
	I1009 19:25:41.822094       1 controller.go:711] "Syncing nftables rules"
	I1009 19:25:50.126927       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:25:50.126971       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6325f53ac167ebc0b8c6efb726fba8ecc26b032658b6717bd74fc3b378a8531c] <==
	I1009 19:26:06.926982       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:26:06.962867       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:26:06.965113       1 policy_source.go:240] refreshing policies
	I1009 19:26:06.965281       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 19:26:06.965395       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 19:26:06.965445       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:26:06.966065       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:26:06.966949       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:26:06.970536       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:26:06.971124       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:26:06.976423       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:26:06.992941       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:26:07.010514       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 19:26:07.026284       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 19:26:07.026813       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:26:07.029428       1 cache.go:39] Caches are synced for autoregister controller
	E1009 19:26:07.047101       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:26:07.057594       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 19:26:07.062649       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:26:07.661047       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:26:07.961818       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:26:09.366773       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:26:09.451515       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:26:09.603001       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:26:09.754309       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [77d00a2936d94a965bf028535622c2c9ceb672a420958d449403171c36fb451f] <==
	W1009 19:25:54.489586       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489644       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489703       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489760       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489830       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489904       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489960       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490006       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490055       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490107       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490445       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490494       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490546       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490604       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490647       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490689       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491173       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491281       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491371       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491455       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491559       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491613       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491684       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491724       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491743       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247] <==
	I1009 19:26:09.367972       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 19:26:09.370210       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:26:09.374439       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:26:09.382903       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:26:09.383122       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:26:09.388224       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:26:09.388386       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 19:26:09.395065       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:26:09.395175       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:26:09.395410       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 19:26:09.395495       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:26:09.395544       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:26:09.398217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:26:09.398348       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:26:09.398362       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 19:26:09.398382       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 19:26:09.398401       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:26:09.398409       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:26:09.398416       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:26:09.408733       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:26:09.412862       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:26:09.417104       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:26:09.420428       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:26:09.426745       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:26:09.429995       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-controller-manager [adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece] <==
	I1009 19:25:08.152485       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-446510" podCIDRs=["10.244.0.0/24"]
	I1009 19:25:08.160346       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 19:25:08.161536       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:25:08.161559       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:25:08.161566       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:25:08.161833       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:25:08.162175       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:25:08.162204       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:25:08.162300       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:25:08.162343       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:25:08.162395       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:25:08.162650       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:25:08.162699       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:25:08.162744       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 19:25:08.163998       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:25:08.164060       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 19:25:08.166662       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 19:25:08.166725       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:25:08.169896       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:25:08.172186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:25:08.180907       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:25:08.189965       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:25:08.198621       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:25:08.219632       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:25:53.118682       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9] <==
	I1009 19:26:06.008949       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:26:06.432394       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:26:07.112796       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:26:07.112836       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:26:07.112919       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:26:07.167812       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:26:07.167954       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:26:07.183284       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:26:07.183786       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:26:07.184358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:26:07.189655       1 config.go:200] "Starting service config controller"
	I1009 19:26:07.189728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:26:07.189762       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:26:07.189766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:26:07.189777       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:26:07.189781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:26:07.204381       1 config.go:309] "Starting node config controller"
	I1009 19:26:07.204468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:26:07.204501       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:26:07.290535       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:26:07.290539       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:26:07.290558       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [7de28c39361bb6f03c0ac5f94db633d3df7c60a98b79761f0dd3da2e1d048139] <==
	I1009 19:25:09.813532       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:25:09.900441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:25:10.010811       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:25:10.011024       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:25:10.011165       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:25:10.036067       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:25:10.036123       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:25:10.040365       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:25:10.040704       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:25:10.040730       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:25:10.043213       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:25:10.043236       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:25:10.043535       1 config.go:200] "Starting service config controller"
	I1009 19:25:10.043555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:25:10.043874       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:25:10.043890       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:25:10.044275       1 config.go:309] "Starting node config controller"
	I1009 19:25:10.044291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:25:10.044299       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:25:10.143425       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 19:25:10.144610       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:25:10.144634       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2957ca26781427c6cacc0b8aa15dae567df148aa2ae15e0f6d497fd8efa73548] <==
	E1009 19:25:01.442200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:25:01.446358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:25:01.446553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 19:25:01.446696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:25:01.446978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:25:01.448023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 19:25:01.448095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:25:01.448101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:25:01.448161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:25:01.448202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:25:01.448299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:25:01.448306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:25:02.290179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:25:02.327007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:25:02.339390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:25:02.445094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:25:02.545004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:25:02.617368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1009 19:25:04.822487       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:25:54.465360       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1009 19:25:54.465381       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1009 19:25:54.465401       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1009 19:25:54.465423       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:25:54.465636       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1009 19:25:54.465651       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d] <==
	I1009 19:26:04.258829       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:26:06.934442       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:26:06.934558       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:26:06.934594       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:26:06.934646       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:26:06.987498       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:26:06.987537       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:26:06.991958       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:26:06.992218       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:26:06.992245       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:26:06.992273       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:26:07.093169       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.328789    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2d24444a7ec3d0d1416185620fa9a73f" pod="kube-system/kube-scheduler-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: I1009 19:26:02.343835    1312 scope.go:117] "RemoveContainer" containerID="535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344375    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9b9a7fa2600eef322b52876b799827a6" pod="kube-system/etcd-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344544    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2d24444a7ec3d0d1416185620fa9a73f" pod="kube-system/kube-scheduler-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344689    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6dc808328c8ee8432c86074d5e1ec618" pod="kube-system/kube-apiserver-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344835    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jmm4z\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25aa7840-d779-4e2c-9dc2-ce45b5a58dab" pod="kube-system/kindnet-jmm4z"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344995    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clcz6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="738e376b-82bd-49dd-9c74-adde76b723b0" pod="kube-system/kube-proxy-clcz6"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.345136    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-4766q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8f8889d0-e1aa-4b5b-9d6d-863d79f4f451" pod="kube-system/coredns-66bc5c9577-4766q"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: I1009 19:26:02.364636    1312 scope.go:117] "RemoveContainer" containerID="adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.365533    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clcz6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="738e376b-82bd-49dd-9c74-adde76b723b0" pod="kube-system/kube-proxy-clcz6"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.365917    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-4766q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8f8889d0-e1aa-4b5b-9d6d-863d79f4f451" pod="kube-system/coredns-66bc5c9577-4766q"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366094    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9b9a7fa2600eef322b52876b799827a6" pod="kube-system/etcd-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366440    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2d24444a7ec3d0d1416185620fa9a73f" pod="kube-system/kube-scheduler-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366637    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e266b54e9eebb25e8e80a2f6e2c83a55" pod="kube-system/kube-controller-manager-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366789    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6dc808328c8ee8432c86074d5e1ec618" pod="kube-system/kube-apiserver-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366961    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jmm4z\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25aa7840-d779-4e2c-9dc2-ce45b5a58dab" pod="kube-system/kindnet-jmm4z"
	Oct 09 19:26:04 pause-446510 kubelet[1312]: W1009 19:26:04.280478    1312 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 09 19:26:06 pause-446510 kubelet[1312]: E1009 19:26:06.906794    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-446510\" is forbidden: User \"system:node:pause-446510\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-446510' and this object" podUID="e266b54e9eebb25e8e80a2f6e2c83a55" pod="kube-system/kube-controller-manager-pause-446510"
	Oct 09 19:26:06 pause-446510 kubelet[1312]: E1009 19:26:06.906977    1312 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-446510\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-446510' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 09 19:26:06 pause-446510 kubelet[1312]: E1009 19:26:06.907178    1312 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-446510\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-446510' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 09 19:26:06 pause-446510 kubelet[1312]: E1009 19:26:06.907648    1312 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-446510\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-446510' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 09 19:26:14 pause-446510 kubelet[1312]: W1009 19:26:14.300916    1312 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 09 19:26:18 pause-446510 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:26:18 pause-446510 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:26:18 pause-446510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-446510 -n pause-446510
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-446510 -n pause-446510: exit status 2 (355.048885ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-446510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-446510
helpers_test.go:243: (dbg) docker inspect pause-446510:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c",
	        "Created": "2025-10-09T19:24:33.671116015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 443299,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:24:33.766841099Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c/hosts",
	        "LogPath": "/var/lib/docker/containers/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c/2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c-json.log",
	        "Name": "/pause-446510",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-446510:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-446510",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2254f2d1ea8f6a0ea9c7a7e5a7c783b8bd0abf92450f97d2d96c08376dc19c9c",
	                "LowerDir": "/var/lib/docker/overlay2/9c5319e164d9b4355c934b5f047e12d3a2b0cdbae730328fa26a74601cebcf5e-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c5319e164d9b4355c934b5f047e12d3a2b0cdbae730328fa26a74601cebcf5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c5319e164d9b4355c934b5f047e12d3a2b0cdbae730328fa26a74601cebcf5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c5319e164d9b4355c934b5f047e12d3a2b0cdbae730328fa26a74601cebcf5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-446510",
	                "Source": "/var/lib/docker/volumes/pause-446510/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-446510",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-446510",
	                "name.minikube.sigs.k8s.io": "pause-446510",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9df2b503fe37b3f1b5846b7d56347188d907f9af324cbab8803642827de80f31",
	            "SandboxKey": "/var/run/docker/netns/9df2b503fe37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-446510": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:0e:45:87:7f:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "51ddeaefb4b54c95c4072f43f62b08889003e6e502e326c9286008b4f2259340",
	                    "EndpointID": "750840acaff65f018f63c17a746b93c04ca3b4a84fadf420f7a65a476208d6fd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-446510",
	                        "2254f2d1ea8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
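The inspect dump above is the full container record, but only a couple of fields matter for this post-mortem (the published ports and the network address). A minimal sketch, assuming docker is on PATH and the pause-446510 container from the dump still exists, that pulls just the 22/tcp host port using the same Go-template syntax the cli_runner lines later in this log use:

	// portquery.go: print the host port mapped to the container's SSH port.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// -f applies a Go template to the inspect output so only the needed field is printed.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"pause-446510").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("ssh host port: %s", out) // 33395 per the Ports block above
	}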
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-446510 -n pause-446510
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-446510 -n pause-446510: exit status 2 (352.363565ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-446510 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-446510 logs -n 25: (1.3464735s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-034324                                                                                                                   │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ ssh     │ -p NoKubernetes-034324 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │                     │
	│ stop    │ -p NoKubernetes-034324                                                                                                                   │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p NoKubernetes-034324 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ ssh     │ -p NoKubernetes-034324 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │                     │
	│ delete  │ -p NoKubernetes-034324                                                                                                                   │ NoKubernetes-034324       │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ start   │ -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:22 UTC │
	│ delete  │ -p missing-upgrade-636288                                                                                                                │ missing-upgrade-636288    │ jenkins │ v1.37.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:22 UTC │
	│ start   │ -p stopped-upgrade-702726 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-702726    │ jenkins │ v1.32.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:22 UTC │
	│ stop    │ -p kubernetes-upgrade-055159                                                                                                             │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:22 UTC │
	│ start   │ -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:23 UTC │
	│ stop    │ stopped-upgrade-702726 stop                                                                                                              │ stopped-upgrade-702726    │ jenkins │ v1.32.0 │ 09 Oct 25 19:22 UTC │ 09 Oct 25 19:23 UTC │
	│ start   │ -p stopped-upgrade-702726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-702726    │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ delete  │ -p stopped-upgrade-702726                                                                                                                │ stopped-upgrade-702726    │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:23 UTC │
	│ start   │ -p running-upgrade-820547 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-820547    │ jenkins │ v1.32.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:24 UTC │
	│ start   │ -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │                     │
	│ start   │ -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:23 UTC │ 09 Oct 25 19:24 UTC │
	│ start   │ -p running-upgrade-820547 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-820547    │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:24 UTC │
	│ delete  │ -p running-upgrade-820547                                                                                                                │ running-upgrade-820547    │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:24 UTC │
	│ delete  │ -p kubernetes-upgrade-055159                                                                                                             │ kubernetes-upgrade-055159 │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:24 UTC │
	│ start   │ -p pause-446510 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-446510              │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:25 UTC │
	│ start   │ -p force-systemd-flag-476949 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │                     │
	│ start   │ -p pause-446510 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-446510              │ jenkins │ v1.37.0 │ 09 Oct 25 19:25 UTC │ 09 Oct 25 19:26 UTC │
	│ pause   │ -p pause-446510 --alsologtostderr -v=5                                                                                                   │ pause-446510              │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
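The last Audit entry is the pause invocation whose failure this post-mortem covers. A small sketch (assuming the same out/minikube-linux-arm64 binary and the pause-446510 profile are still available) to re-run that command and surface its exit status:

	// repro.go: re-run the failing pause command from the Audit table and report its exit status.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "pause", "-p", "pause-446510",
			"--alsologtostderr", "-v=5")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status:", exitErr.ExitCode()) // non-zero is what the test observed
		}
	}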
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:25:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:25:53.161709  447071 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:25:53.161822  447071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:25:53.161832  447071 out.go:374] Setting ErrFile to fd 2...
	I1009 19:25:53.161838  447071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:25:53.162179  447071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:25:53.162537  447071 out.go:368] Setting JSON to false
	I1009 19:25:53.163716  447071 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7705,"bootTime":1760030249,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:25:53.163788  447071 start.go:141] virtualization:  
	I1009 19:25:53.167002  447071 out.go:179] * [pause-446510] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:25:53.170887  447071 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:25:53.170944  447071 notify.go:220] Checking for updates...
	I1009 19:25:53.182417  447071 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:25:53.185336  447071 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:25:53.188224  447071 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:25:53.191077  447071 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:25:53.194012  447071 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:25:53.197220  447071 config.go:182] Loaded profile config "pause-446510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:25:53.197863  447071 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:25:53.225188  447071 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:25:53.225296  447071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:25:53.293137  447071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:25:53.283678598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:25:53.293245  447071 docker.go:318] overlay module found
	I1009 19:25:53.296415  447071 out.go:179] * Using the docker driver based on existing profile
	I1009 19:25:53.299183  447071 start.go:305] selected driver: docker
	I1009 19:25:53.299206  447071 start.go:925] validating driver "docker" against &{Name:pause-446510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:25:53.299353  447071 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:25:53.299482  447071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:25:53.354922  447071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:25:53.345894849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:25:53.355313  447071 cni.go:84] Creating CNI manager for ""
	I1009 19:25:53.355379  447071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:25:53.355423  447071 start.go:349] cluster config:
	{Name:pause-446510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:25:53.358686  447071 out.go:179] * Starting "pause-446510" primary control-plane node in "pause-446510" cluster
	I1009 19:25:53.361515  447071 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:25:53.364445  447071 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:25:53.367270  447071 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:25:53.367320  447071 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:25:53.367336  447071 cache.go:64] Caching tarball of preloaded images
	I1009 19:25:53.367347  447071 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:25:53.367448  447071 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:25:53.367458  447071 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:25:53.367596  447071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/config.json ...
	I1009 19:25:53.387131  447071 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:25:53.387155  447071 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:25:53.387174  447071 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:25:53.387197  447071 start.go:360] acquireMachinesLock for pause-446510: {Name:mk846e63e7d5721a4c09542a50933b19f8fd3ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:25:53.387261  447071 start.go:364] duration metric: took 36.973µs to acquireMachinesLock for "pause-446510"
	I1009 19:25:53.387285  447071 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:25:53.387291  447071 fix.go:54] fixHost starting: 
	I1009 19:25:53.387559  447071 cli_runner.go:164] Run: docker container inspect pause-446510 --format={{.State.Status}}
	I1009 19:25:53.404435  447071 fix.go:112] recreateIfNeeded on pause-446510: state=Running err=<nil>
	W1009 19:25:53.404474  447071 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:25:53.407662  447071 out.go:252] * Updating the running docker "pause-446510" container ...
	I1009 19:25:53.407700  447071 machine.go:93] provisionDockerMachine start ...
	I1009 19:25:53.407788  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:53.424924  447071 main.go:141] libmachine: Using SSH client type: native
	I1009 19:25:53.425257  447071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1009 19:25:53.425267  447071 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:25:53.573822  447071 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-446510
	
	I1009 19:25:53.573854  447071 ubuntu.go:182] provisioning hostname "pause-446510"
	I1009 19:25:53.573918  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:53.593505  447071 main.go:141] libmachine: Using SSH client type: native
	I1009 19:25:53.593840  447071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1009 19:25:53.593858  447071 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-446510 && echo "pause-446510" | sudo tee /etc/hostname
	I1009 19:25:53.752053  447071 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-446510
	
	I1009 19:25:53.752130  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:53.770615  447071 main.go:141] libmachine: Using SSH client type: native
	I1009 19:25:53.770987  447071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1009 19:25:53.771010  447071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-446510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-446510/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-446510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:25:53.914543  447071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:25:53.914567  447071 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:25:53.914599  447071 ubuntu.go:190] setting up certificates
	I1009 19:25:53.914613  447071 provision.go:84] configureAuth start
	I1009 19:25:53.914673  447071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-446510
	I1009 19:25:53.933117  447071 provision.go:143] copyHostCerts
	I1009 19:25:53.933185  447071 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:25:53.933205  447071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:25:53.933283  447071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:25:53.933386  447071 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:25:53.933401  447071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:25:53.933429  447071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:25:53.933493  447071 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:25:53.933504  447071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:25:53.933531  447071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:25:53.933584  447071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.pause-446510 san=[127.0.0.1 192.168.85.2 localhost minikube pause-446510]
	I1009 19:25:54.090693  447071 provision.go:177] copyRemoteCerts
	I1009 19:25:54.090765  447071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:25:54.090819  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:54.110790  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:25:54.218506  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:25:54.238171  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:25:54.261309  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:25:54.284111  447071 provision.go:87] duration metric: took 369.483586ms to configureAuth
	I1009 19:25:54.284139  447071 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:25:54.284361  447071 config.go:182] Loaded profile config "pause-446510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:25:54.284474  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:54.302239  447071 main.go:141] libmachine: Using SSH client type: native
	I1009 19:25:54.302552  447071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33395 <nil> <nil>}
	I1009 19:25:54.302572  447071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:25:59.619589  447071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:25:59.619613  447071 machine.go:96] duration metric: took 6.211904521s to provisionDockerMachine
	I1009 19:25:59.619624  447071 start.go:293] postStartSetup for "pause-446510" (driver="docker")
	I1009 19:25:59.619636  447071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:25:59.619711  447071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:25:59.619759  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:59.638232  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:25:59.742091  447071 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:25:59.745899  447071 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:25:59.745928  447071 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:25:59.745940  447071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:25:59.745995  447071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:25:59.746112  447071 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:25:59.746243  447071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:25:59.753872  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:25:59.771882  447071 start.go:296] duration metric: took 152.24112ms for postStartSetup
	I1009 19:25:59.771959  447071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:25:59.771998  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:59.789834  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:25:59.887528  447071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:25:59.892839  447071 fix.go:56] duration metric: took 6.505541272s for fixHost
	I1009 19:25:59.892866  447071 start.go:83] releasing machines lock for "pause-446510", held for 6.505591512s
	I1009 19:25:59.892950  447071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-446510
	I1009 19:25:59.911296  447071 ssh_runner.go:195] Run: cat /version.json
	I1009 19:25:59.911354  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:59.911354  447071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:25:59.911429  447071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-446510
	I1009 19:25:59.933567  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:25:59.935135  447071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33395 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/pause-446510/id_rsa Username:docker}
	I1009 19:26:00.067219  447071 ssh_runner.go:195] Run: systemctl --version
	I1009 19:26:00.261370  447071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:26:00.350451  447071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:26:00.356449  447071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:26:00.356523  447071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:26:00.374273  447071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:26:00.374300  447071 start.go:495] detecting cgroup driver to use...
	I1009 19:26:00.374340  447071 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:26:00.374394  447071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:26:00.394759  447071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:26:00.414552  447071 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:26:00.414634  447071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:26:00.433258  447071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:26:00.451066  447071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:26:00.603716  447071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:26:00.740519  447071 docker.go:234] disabling docker service ...
	I1009 19:26:00.740585  447071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:26:00.757054  447071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:26:00.770452  447071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:26:00.896594  447071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:26:01.028648  447071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:26:01.043081  447071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:26:01.057408  447071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:26:01.057496  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.067589  447071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:26:01.067691  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.077069  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.086427  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.095689  447071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:26:01.104072  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.113722  447071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.122719  447071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:26:01.132163  447071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:26:01.140322  447071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:26:01.149042  447071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:01.285149  447071 ssh_runner.go:195] Run: sudo systemctl restart crio
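The sed invocations above are keyed rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls), applied before crio is restarted. A stand-alone sketch of the same idea in Go (a hypothetical helper for illustration, not minikube's own code), assuming the file is readable locally:

	// crioconf.go: mirror the first two sed edits against 02-crio.conf and print the result.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		conf := string(data)
		// Pin the pause image, then the cgroup manager, exactly as the sed commands above do.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf) // in the log the rewritten file is left in place and crio is restarted
	}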
	I1009 19:26:01.462654  447071 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:26:01.462723  447071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:26:01.466634  447071 start.go:563] Will wait 60s for crictl version
	I1009 19:26:01.466752  447071 ssh_runner.go:195] Run: which crictl
	I1009 19:26:01.470356  447071 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:26:01.497419  447071 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:26:01.497504  447071 ssh_runner.go:195] Run: crio --version
	I1009 19:26:01.530007  447071 ssh_runner.go:195] Run: crio --version
	I1009 19:26:01.563093  447071 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:26:01.565978  447071 cli_runner.go:164] Run: docker network inspect pause-446510 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
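The network-inspect call above packs name, driver, subnet, gateway, MTU and container IPs into one JSON blob via a single Go template. A smaller sketch (assuming docker is on PATH) that pulls just the subnet and gateway of the same network:

	// netquery.go: print the subnet and gateway of the pause-446510 docker network.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "pause-446510",
			"-f", `{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}`).Output()
		if err != nil {
			fmt.Println("network inspect failed:", err)
			return
		}
		fmt.Printf("%s\n", out) // expected 192.168.85.0/24 via 192.168.85.1 for this profile
	}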
	I1009 19:26:01.582660  447071 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:26:01.586664  447071 kubeadm.go:883] updating cluster {Name:pause-446510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:26:01.586833  447071 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:26:01.586898  447071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:26:01.624633  447071 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:26:01.624658  447071 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:26:01.624715  447071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:26:01.651204  447071 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:26:01.651231  447071 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:26:01.651240  447071 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 19:26:01.651344  447071 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-446510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:26:01.651428  447071 ssh_runner.go:195] Run: crio config
	I1009 19:26:01.717796  447071 cni.go:84] Creating CNI manager for ""
	I1009 19:26:01.717830  447071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:26:01.717857  447071 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:26:01.717895  447071 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-446510 NodeName:pause-446510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:26:01.718045  447071 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-446510"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:26:01.718158  447071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:26:01.726213  447071 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:26:01.726288  447071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:26:01.733971  447071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1009 19:26:01.746644  447071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:26:01.760838  447071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
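The kubeadm config just copied to /var/tmp/minikube/kubeadm.yaml.new is the multi-document YAML stream shown in full above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch, assuming gopkg.in/yaml.v3 is available and the stream has been saved locally under that path, that walks the documents and reads the kubelet's cgroupDriver, which should match the "cgroupfs" driver detected earlier in this log:

	// kubeadmcheck.go: list the YAML documents in the generated config and print the kubelet cgroup driver.
	package main

	import (
		"bytes"
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once every document in the stream has been read
			}
			fmt.Println("kind:", doc["kind"])
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("  cgroupDriver:", doc["cgroupDriver"]) // "cgroupfs" per the config above
			}
		}
	}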
	I1009 19:26:01.773590  447071 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:26:01.777331  447071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:01.913115  447071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:26:01.926815  447071 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510 for IP: 192.168.85.2
	I1009 19:26:01.926882  447071 certs.go:195] generating shared ca certs ...
	I1009 19:26:01.926914  447071 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:01.927085  447071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:26:01.927153  447071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:26:01.927175  447071 certs.go:257] generating profile certs ...
	I1009 19:26:01.927298  447071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.key
	I1009 19:26:01.927411  447071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/apiserver.key.2b10b5e4
	I1009 19:26:01.927485  447071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/proxy-client.key
	I1009 19:26:01.927637  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:26:01.927703  447071 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:26:01.927747  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:26:01.927798  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:26:01.927861  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:26:01.927912  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:26:01.928017  447071 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:26:01.928724  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:26:01.949867  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:26:01.969547  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:26:01.987561  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:26:02.006902  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 19:26:02.027586  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:26:02.045720  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:26:02.064756  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:26:02.082661  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:26:02.100962  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:26:02.119542  447071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:26:02.137643  447071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:26:02.150778  447071 ssh_runner.go:195] Run: openssl version
	I1009 19:26:02.157264  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:26:02.166120  447071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:26:02.169939  447071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:26:02.170004  447071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:26:02.212211  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:26:02.220227  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:26:02.229107  447071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:02.233244  447071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:02.233316  447071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:26:02.275321  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:26:02.283546  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:26:02.293364  447071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:26:02.304063  447071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:26:02.304135  447071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:26:02.348069  447071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
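The hash-and-symlink sequence above follows the standard OpenSSL CA directory layout: each trusted PEM gets a <subject-hash>.0 symlink under /etc/ssl/certs so OpenSSL can find it by hash. A minimal sketch of the same step for one certificate, with file names taken from this run:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"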
	I1009 19:26:02.356387  447071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:26:02.365176  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:26:02.428917  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:26:02.503284  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:26:02.651018  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:26:02.739099  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:26:02.803072  447071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
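Each of the openssl runs above passes -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 24 hours; this is how minikube decides whether the existing control-plane certificates can be reused. A standalone check of one of these certs, run on the node, would look like:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring within 24h"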
	I1009 19:26:02.870846  447071 kubeadm.go:400] StartCluster: {Name:pause-446510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-446510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:26:02.870969  447071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:26:02.871028  447071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:26:02.908452  447071 cri.go:89] found id: "4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247"
	I1009 19:26:02.908474  447071 cri.go:89] found id: "a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d"
	I1009 19:26:02.908479  447071 cri.go:89] found id: "7c1221bd7d59b30498b0d2aaec6f37079059cc5bb4843c961406bad3605561b5"
	I1009 19:26:02.908484  447071 cri.go:89] found id: "156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5"
	I1009 19:26:02.908488  447071 cri.go:89] found id: "2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9"
	I1009 19:26:02.908498  447071 cri.go:89] found id: "add9e3f1f95ea9b58263d42fb6df43549754a46ca9fa42ed834cabacc4a50b64"
	I1009 19:26:02.908502  447071 cri.go:89] found id: "6325f53ac167ebc0b8c6efb726fba8ecc26b032658b6717bd74fc3b378a8531c"
	I1009 19:26:02.908513  447071 cri.go:89] found id: "535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3"
	I1009 19:26:02.908517  447071 cri.go:89] found id: "7de28c39361bb6f03c0ac5f94db633d3df7c60a98b79761f0dd3da2e1d048139"
	I1009 19:26:02.908523  447071 cri.go:89] found id: "ccdea48a947e34466622a338db5c1e0fd172d754df8a2c4a985d7333b7953261"
	I1009 19:26:02.908527  447071 cri.go:89] found id: "adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece"
	I1009 19:26:02.908531  447071 cri.go:89] found id: "2957ca26781427c6cacc0b8aa15dae567df148aa2ae15e0f6d497fd8efa73548"
	I1009 19:26:02.908534  447071 cri.go:89] found id: "f0c59f52cc589600adb202824ffa9854d5377ef7e264dff8b2fba0a925be1158"
	I1009 19:26:02.908537  447071 cri.go:89] found id: "77d00a2936d94a965bf028535622c2c9ceb672a420958d449403171c36fb451f"
	I1009 19:26:02.908540  447071 cri.go:89] found id: ""
	I1009 19:26:02.908588  447071 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:26:02.927031  447071 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:26:02Z" level=error msg="open /run/runc: no such file or directory"
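The warning above is non-fatal here: minikube first tries to enumerate paused containers with runc directly and falls back when the runc state directory does not exist. The same containers can be listed through the CRI instead (a sketch; the paths are the defaults used in this run):

    sudo ls /run/runc        # runc state directory, absent here, hence the error
    sudo crictl ps -a        # list the containers via CRI-O instead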
	I1009 19:26:02.927111  447071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:26:02.945214  447071 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:26:02.945234  447071 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:26:02.945286  447071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:26:02.953390  447071 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:26:02.953906  447071 kubeconfig.go:125] found "pause-446510" server: "https://192.168.85.2:8443"
	I1009 19:26:02.954470  447071 kapi.go:59] client config for pause-446510: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.key", CAFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:26:02.955005  447071 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:26:02.955027  447071 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:26:02.955034  447071 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:26:02.955042  447071 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:26:02.955046  447071 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:26:02.955315  447071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:26:02.966628  447071 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 19:26:02.966663  447071 kubeadm.go:601] duration metric: took 21.421929ms to restartPrimaryControlPlane
	I1009 19:26:02.966672  447071 kubeadm.go:402] duration metric: took 95.836296ms to StartCluster
	I1009 19:26:02.966687  447071 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:02.966745  447071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:26:02.967420  447071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:26:02.967661  447071 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:26:02.968004  447071 config.go:182] Loaded profile config "pause-446510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:02.968052  447071 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:26:02.971525  447071 out.go:179] * Verifying Kubernetes components...
	I1009 19:26:02.971604  447071 out.go:179] * Enabled addons: 
	I1009 19:26:02.975449  447071 addons.go:514] duration metric: took 7.381549ms for enable addons: enabled=[]
	I1009 19:26:02.975578  447071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:26:03.223761  447071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:26:03.244580  447071 node_ready.go:35] waiting up to 6m0s for node "pause-446510" to be "Ready" ...
	I1009 19:26:06.944456  447071 node_ready.go:49] node "pause-446510" is "Ready"
	I1009 19:26:06.944490  447071 node_ready.go:38] duration metric: took 3.699882871s for node "pause-446510" to be "Ready" ...
	I1009 19:26:06.944505  447071 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:26:06.944571  447071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:26:06.959256  447071 api_server.go:72] duration metric: took 3.991559226s to wait for apiserver process to appear ...
	I1009 19:26:06.959293  447071 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:26:06.959313  447071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:26:07.017587  447071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:26:07.017684  447071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:26:07.460356  447071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:26:07.469590  447071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:26:07.469622  447071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:26:07.960237  447071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:26:07.977402  447071 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
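The per-check [+]/[-] listings above come from the verbose form of the apiserver health endpoint; once all post-start hooks have completed it returns a bare "ok", which is what the 200 response shows. The same endpoint can be queried directly using the cluster credentials from the kubeconfig (a sketch):

    kubectl get --raw '/healthz?verbose'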
	I1009 19:26:07.978837  447071 api_server.go:141] control plane version: v1.34.1
	I1009 19:26:07.978909  447071 api_server.go:131] duration metric: took 1.019607422s to wait for apiserver health ...
	I1009 19:26:07.978946  447071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:26:07.985182  447071 system_pods.go:59] 7 kube-system pods found
	I1009 19:26:07.985275  447071 system_pods.go:61] "coredns-66bc5c9577-4766q" [8f8889d0-e1aa-4b5b-9d6d-863d79f4f451] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:26:07.985306  447071 system_pods.go:61] "etcd-pause-446510" [729678e3-dff0-4a70-9a51-dd43cd08b28f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:26:07.985344  447071 system_pods.go:61] "kindnet-jmm4z" [25aa7840-d779-4e2c-9dc2-ce45b5a58dab] Running
	I1009 19:26:07.985374  447071 system_pods.go:61] "kube-apiserver-pause-446510" [c3a97e07-c22c-40ba-a88f-543cec6496ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:26:07.985422  447071 system_pods.go:61] "kube-controller-manager-pause-446510" [93d3e3bd-7151-490d-82cf-035e8e9022d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:26:07.985448  447071 system_pods.go:61] "kube-proxy-clcz6" [738e376b-82bd-49dd-9c74-adde76b723b0] Running
	I1009 19:26:07.985471  447071 system_pods.go:61] "kube-scheduler-pause-446510" [31319d77-17ab-40ee-aa43-72a6d6f1b565] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:26:07.985512  447071 system_pods.go:74] duration metric: took 6.541997ms to wait for pod list to return data ...
	I1009 19:26:07.985540  447071 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:26:08.045964  447071 default_sa.go:45] found service account: "default"
	I1009 19:26:08.046040  447071 default_sa.go:55] duration metric: took 60.480178ms for default service account to be created ...
	I1009 19:26:08.046064  447071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:26:08.051111  447071 system_pods.go:86] 7 kube-system pods found
	I1009 19:26:08.051197  447071 system_pods.go:89] "coredns-66bc5c9577-4766q" [8f8889d0-e1aa-4b5b-9d6d-863d79f4f451] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:26:08.051226  447071 system_pods.go:89] "etcd-pause-446510" [729678e3-dff0-4a70-9a51-dd43cd08b28f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:26:08.051267  447071 system_pods.go:89] "kindnet-jmm4z" [25aa7840-d779-4e2c-9dc2-ce45b5a58dab] Running
	I1009 19:26:08.051302  447071 system_pods.go:89] "kube-apiserver-pause-446510" [c3a97e07-c22c-40ba-a88f-543cec6496ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:26:08.051328  447071 system_pods.go:89] "kube-controller-manager-pause-446510" [93d3e3bd-7151-490d-82cf-035e8e9022d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:26:08.051374  447071 system_pods.go:89] "kube-proxy-clcz6" [738e376b-82bd-49dd-9c74-adde76b723b0] Running
	I1009 19:26:08.051398  447071 system_pods.go:89] "kube-scheduler-pause-446510" [31319d77-17ab-40ee-aa43-72a6d6f1b565] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:26:08.051437  447071 system_pods.go:126] duration metric: took 5.351359ms to wait for k8s-apps to be running ...
	I1009 19:26:08.051468  447071 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:26:08.051580  447071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:26:08.067688  447071 system_svc.go:56] duration metric: took 16.212029ms WaitForService to wait for kubelet
	I1009 19:26:08.067774  447071 kubeadm.go:586] duration metric: took 5.100079855s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:26:08.067810  447071 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:26:08.071994  447071 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:26:08.072092  447071 node_conditions.go:123] node cpu capacity is 2
	I1009 19:26:08.072122  447071 node_conditions.go:105] duration metric: took 4.290619ms to run NodePressure ...
	I1009 19:26:08.072170  447071 start.go:241] waiting for startup goroutines ...
	I1009 19:26:08.072184  447071 start.go:246] waiting for cluster config update ...
	I1009 19:26:08.072194  447071 start.go:255] writing updated cluster config ...
	I1009 19:26:08.072536  447071 ssh_runner.go:195] Run: rm -f paused
	I1009 19:26:08.076921  447071 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:26:08.077547  447071 kapi.go:59] client config for pause-446510: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/profiles/pause-446510/client.key", CAFile:"/home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:26:08.080734  447071 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4766q" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:26:10.086673  447071 pod_ready.go:104] pod "coredns-66bc5c9577-4766q" is not "Ready", error: <nil>
	W1009 19:26:12.585896  447071 pod_ready.go:104] pod "coredns-66bc5c9577-4766q" is not "Ready", error: <nil>
	I1009 19:26:14.586680  447071 pod_ready.go:94] pod "coredns-66bc5c9577-4766q" is "Ready"
	I1009 19:26:14.586711  447071 pod_ready.go:86] duration metric: took 6.505937028s for pod "coredns-66bc5c9577-4766q" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:14.589948  447071 pod_ready.go:83] waiting for pod "etcd-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.599024  447071 pod_ready.go:94] pod "etcd-pause-446510" is "Ready"
	I1009 19:26:16.599096  447071 pod_ready.go:86] duration metric: took 2.009118091s for pod "etcd-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.602797  447071 pod_ready.go:83] waiting for pod "kube-apiserver-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.611325  447071 pod_ready.go:94] pod "kube-apiserver-pause-446510" is "Ready"
	I1009 19:26:16.611410  447071 pod_ready.go:86] duration metric: took 8.543411ms for pod "kube-apiserver-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.615045  447071 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.624181  447071 pod_ready.go:94] pod "kube-controller-manager-pause-446510" is "Ready"
	I1009 19:26:16.624258  447071 pod_ready.go:86] duration metric: took 9.146119ms for pod "kube-controller-manager-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.629496  447071 pod_ready.go:83] waiting for pod "kube-proxy-clcz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:16.984147  447071 pod_ready.go:94] pod "kube-proxy-clcz6" is "Ready"
	I1009 19:26:16.984174  447071 pod_ready.go:86] duration metric: took 354.61207ms for pod "kube-proxy-clcz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:17.186050  447071 pod_ready.go:83] waiting for pod "kube-scheduler-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:17.584704  447071 pod_ready.go:94] pod "kube-scheduler-pause-446510" is "Ready"
	I1009 19:26:17.584784  447071 pod_ready.go:86] duration metric: took 398.705957ms for pod "kube-scheduler-pause-446510" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:26:17.584823  447071 pod_ready.go:40] duration metric: took 9.507856963s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
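The readiness polling above can be reproduced with kubectl's built-in wait, using the same label selectors listed in the log (a sketch):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m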
	I1009 19:26:17.671192  447071 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:26:17.675482  447071 out.go:179] * Done! kubectl is now configured to use "pause-446510" cluster and "default" namespace by default
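The version line above notes a client/server skew of one minor version (kubectl 1.33 against a 1.34 control plane), which is within kubectl's supported skew. Both versions can be confirmed with:

    kubectl version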
	
	
	==> CRI-O <==
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.557333018Z" level=info msg="Created container a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d: kube-system/kube-scheduler-pause-446510/kube-scheduler" id=687d6956-7f13-4b70-8fed-bb9f317ae3cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.558694891Z" level=info msg="Starting container: a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d" id=46243551-d12a-4d38-b213-65737b3824d4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.566664608Z" level=info msg="Started container" PID=2291 containerID=2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9 description=kube-system/kube-proxy-clcz6/kube-proxy id=39ddc8a8-3ef8-443b-a9cb-8739b002d15a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8d4f950a27fb273ee38019c7deda49b55437b141a0b7df8ca9d8828ca164fae
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.581539954Z" level=info msg="Created container 156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5: kube-system/kindnet-jmm4z/kindnet-cni" id=6f755dff-e6e5-43fc-90e8-17cb9241bc74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.585225231Z" level=info msg="Starting container: 156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5" id=580f0df8-e4c2-45ef-9d56-eef77f40c3da name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.61220905Z" level=info msg="Started container" PID=2315 containerID=a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d description=kube-system/kube-scheduler-pause-446510/kube-scheduler id=46243551-d12a-4d38-b213-65737b3824d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bd4c37cb098a361d5e58bf64b2b8b60fef80c5e929c48e32ae457dec50f54fe
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.632517701Z" level=info msg="Started container" PID=2314 containerID=156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5 description=kube-system/kindnet-jmm4z/kindnet-cni id=580f0df8-e4c2-45ef-9d56-eef77f40c3da name=/runtime.v1.RuntimeService/StartContainer sandboxID=6bc6ca5eaa699340634edc38c40860af490ab90e1e398eefd48bda16b303ac4b
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.678747223Z" level=info msg="Created container 4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247: kube-system/kube-controller-manager-pause-446510/kube-controller-manager" id=ee02889d-040e-4242-962a-2330122d9967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.679409197Z" level=info msg="Starting container: 4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247" id=b5cf1474-830f-4410-9fa9-98dee9ade4c3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:26:02 pause-446510 crio[2059]: time="2025-10-09T19:26:02.684065195Z" level=info msg="Started container" PID=2335 containerID=4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247 description=kube-system/kube-controller-manager-pause-446510/kube-controller-manager id=b5cf1474-830f-4410-9fa9-98dee9ade4c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e58ccfa9d182fd450fa34026c190674d8e600ab8dedaf18ea1aca78cb3b72138
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.03434818Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.038163535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.038200647Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.038223121Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.041253727Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.041289477Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.041312887Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.044541936Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.044584817Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.044609761Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.047892283Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.047927205Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.047951533Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.051155581Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:26:13 pause-446510 crio[2059]: time="2025-10-09T19:26:13.051192973Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4c2452d6f36d6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   e58ccfa9d182f       kube-controller-manager-pause-446510   kube-system
	a6f5fd6f4bea9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   3bd4c37cb098a       kube-scheduler-pause-446510            kube-system
	7c1221bd7d59b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   bf9887fd41187       coredns-66bc5c9577-4766q               kube-system
	156adacbda5a7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   6bc6ca5eaa699       kindnet-jmm4z                          kube-system
	2bc102c3bd634       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   f8d4f950a27fb       kube-proxy-clcz6                       kube-system
	add9e3f1f95ea       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   000d837fe2be5       etcd-pause-446510                      kube-system
	6325f53ac167e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   407a91e13c50c       kube-apiserver-pause-446510            kube-system
	535e956a01dd2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   bf9887fd41187       coredns-66bc5c9577-4766q               kube-system
	7de28c39361bb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   f8d4f950a27fb       kube-proxy-clcz6                       kube-system
	ccdea48a947e3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   6bc6ca5eaa699       kindnet-jmm4z                          kube-system
	adff605a7c65b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   e58ccfa9d182f       kube-controller-manager-pause-446510   kube-system
	2957ca2678142       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   3bd4c37cb098a       kube-scheduler-pause-446510            kube-system
	f0c59f52cc589       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   000d837fe2be5       etcd-pause-446510                      kube-system
	77d00a2936d94       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   407a91e13c50c       kube-apiserver-pause-446510            kube-system
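The container status table above is the CRI's view of the node, showing the current attempt-1 containers alongside the exited attempt-0 containers from before the restart. It can be reproduced from inside the node (a sketch, assuming the pause-446510 profile):

    minikube -p pause-446510 ssh -- sudo crictl ps -a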
	
	
	==> coredns [535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48710 - 7967 "HINFO IN 4988741048091739230.5390373744845976817. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014183299s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7c1221bd7d59b30498b0d2aaec6f37079059cc5bb4843c961406bad3605561b5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55430 - 8399 "HINFO IN 1930796750062550998.7867124658167409791. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035398695s
	
	
	==> describe nodes <==
	Name:               pause-446510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-446510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=pause-446510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_25_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:25:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-446510
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:26:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:25:50 +0000   Thu, 09 Oct 2025 19:24:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:25:50 +0000   Thu, 09 Oct 2025 19:24:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:25:50 +0000   Thu, 09 Oct 2025 19:24:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:25:50 +0000   Thu, 09 Oct 2025 19:25:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-446510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7335011f64e0455987ebbdbb40738a9d
	  System UUID:                8a10f5e3-c4bc-4ded-a988-ed05ff3fb3ee
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4766q                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-446510                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-jmm4z                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-pause-446510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-446510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-clcz6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-446510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 73s   kube-proxy       
	  Normal   Starting                 15s   kube-proxy       
	  Normal   Starting                 79s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s   kubelet          Node pause-446510 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s   kubelet          Node pause-446510 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s   kubelet          Node pause-446510 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s   node-controller  Node pause-446510 event: Registered Node pause-446510 in Controller
	  Normal   NodeReady                33s   kubelet          Node pause-446510 status is now: NodeReady
	  Normal   RegisteredNode           14s   node-controller  Node pause-446510 event: Registered Node pause-446510 in Controller
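The node summary above matches what kubectl describe node prints; with the kubeconfig and context minikube wrote for this profile it can be regenerated with (a sketch):

    kubectl --context pause-446510 describe node pause-446510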
	
	
	==> dmesg <==
	[Oct 9 18:56] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:57] overlayfs: idmapped layers are currently not supported
	[  +4.128207] overlayfs: idmapped layers are currently not supported
	[Oct 9 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:01] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [add9e3f1f95ea9b58263d42fb6df43549754a46ca9fa42ed834cabacc4a50b64] <==
	{"level":"warn","ts":"2025-10-09T19:26:05.293209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.301842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.328543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.346851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.366954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.389139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.412482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.428191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.450689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.462701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.509743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.512350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.528374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.544435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.563118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.624753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.670228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.671436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.695838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.721767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.763104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.819028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.846213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.870382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:26:05.929059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47430","server-name":"","error":"EOF"}
	
	
	==> etcd [f0c59f52cc589600adb202824ffa9854d5377ef7e264dff8b2fba0a925be1158] <==
	{"level":"warn","ts":"2025-10-09T19:25:00.070180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.083435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.117778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.150867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.178829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.195757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:25:00.333120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38120","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T19:25:54.478379Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-09T19:25:54.478450Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-446510","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-09T19:25:54.479071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T19:25:54.619349Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-09T19:25:54.619517Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T19:25:54.619566Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T19:25:54.619579Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-09T19:25:54.619538Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-09T19:25:54.619685Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T19:25:54.619720Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-09T19:25:54.619768Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"error","ts":"2025-10-09T19:25:54.619780Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T19:25:54.619813Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-09T19:25:54.619823Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-09T19:25:54.623426Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-09T19:25:54.623505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T19:25:54.623540Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T19:25:54.623547Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-446510","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 19:26:23 up  2:08,  0 user,  load average: 2.47, 3.09, 2.52
	Linux pause-446510 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [156adacbda5a7ee47595d38d03d7cc3df76e52ad547914b5020259bea4ad6dc5] <==
	I1009 19:26:02.737157       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:26:02.819571       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:26:02.819707       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:26:02.819720       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:26:02.819731       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:26:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:26:03.034121       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:26:03.034263       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:26:03.045139       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:26:03.046196       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1009 19:26:07.045982       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:26:07.046102       1 metrics.go:72] Registering metrics
	I1009 19:26:07.046208       1 controller.go:711] "Syncing nftables rules"
	I1009 19:26:13.033835       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:26:13.033986       1 main.go:301] handling current node
	I1009 19:26:23.034358       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:26:23.034396       1 main.go:301] handling current node
	
	
	==> kindnet [ccdea48a947e34466622a338db5c1e0fd172d754df8a2c4a985d7333b7953261] <==
	I1009 19:25:09.916694       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:25:09.916939       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:25:09.917094       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:25:09.917112       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:25:09.917122       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:25:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:25:10.120471       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:25:10.121326       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:25:10.121449       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:25:10.121608       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:25:40.120559       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:25:40.121564       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 19:25:40.121574       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:25:40.121676       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1009 19:25:41.821979       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:25:41.822013       1 metrics.go:72] Registering metrics
	I1009 19:25:41.822094       1 controller.go:711] "Syncing nftables rules"
	I1009 19:25:50.126927       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:25:50.126971       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6325f53ac167ebc0b8c6efb726fba8ecc26b032658b6717bd74fc3b378a8531c] <==
	I1009 19:26:06.926982       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:26:06.962867       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:26:06.965113       1 policy_source.go:240] refreshing policies
	I1009 19:26:06.965281       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 19:26:06.965395       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 19:26:06.965445       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:26:06.966065       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:26:06.966949       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:26:06.970536       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:26:06.971124       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:26:06.976423       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:26:06.992941       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:26:07.010514       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 19:26:07.026284       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 19:26:07.026813       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:26:07.029428       1 cache.go:39] Caches are synced for autoregister controller
	E1009 19:26:07.047101       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:26:07.057594       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 19:26:07.062649       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:26:07.661047       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:26:07.961818       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:26:09.366773       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:26:09.451515       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:26:09.603001       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:26:09.754309       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [77d00a2936d94a965bf028535622c2c9ceb672a420958d449403171c36fb451f] <==
	W1009 19:25:54.489586       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489644       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489703       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489760       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489830       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489904       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.489960       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490006       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490055       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490107       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490445       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490494       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490546       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490604       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490647       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.490689       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491173       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491281       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491371       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491455       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491559       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491613       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491684       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491724       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:25:54.491743       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4c2452d6f36d69eb02ac958d08bfe6e5ccdc52ec579ac18d2917f6f9b90b9247] <==
	I1009 19:26:09.367972       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 19:26:09.370210       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:26:09.374439       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:26:09.382903       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:26:09.383122       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:26:09.388224       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:26:09.388386       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 19:26:09.395065       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:26:09.395175       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:26:09.395410       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 19:26:09.395495       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:26:09.395544       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:26:09.398217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:26:09.398348       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:26:09.398362       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 19:26:09.398382       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 19:26:09.398401       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:26:09.398409       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:26:09.398416       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:26:09.408733       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:26:09.412862       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:26:09.417104       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:26:09.420428       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:26:09.426745       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:26:09.429995       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-controller-manager [adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece] <==
	I1009 19:25:08.152485       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-446510" podCIDRs=["10.244.0.0/24"]
	I1009 19:25:08.160346       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 19:25:08.161536       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:25:08.161559       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:25:08.161566       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:25:08.161833       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:25:08.162175       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:25:08.162204       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:25:08.162300       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:25:08.162343       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:25:08.162395       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:25:08.162650       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:25:08.162699       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:25:08.162744       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 19:25:08.163998       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:25:08.164060       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 19:25:08.166662       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 19:25:08.166725       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:25:08.169896       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:25:08.172186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:25:08.180907       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:25:08.189965       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:25:08.198621       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:25:08.219632       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:25:53.118682       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2bc102c3bd634215c8115551a0b99dc9933b0fccee60af2ea882179f5cc0f8c9] <==
	I1009 19:26:06.008949       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:26:06.432394       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:26:07.112796       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:26:07.112836       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:26:07.112919       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:26:07.167812       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:26:07.167954       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:26:07.183284       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:26:07.183786       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:26:07.184358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:26:07.189655       1 config.go:200] "Starting service config controller"
	I1009 19:26:07.189728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:26:07.189762       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:26:07.189766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:26:07.189777       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:26:07.189781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:26:07.204381       1 config.go:309] "Starting node config controller"
	I1009 19:26:07.204468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:26:07.204501       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:26:07.290535       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:26:07.290539       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:26:07.290558       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [7de28c39361bb6f03c0ac5f94db633d3df7c60a98b79761f0dd3da2e1d048139] <==
	I1009 19:25:09.813532       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:25:09.900441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:25:10.010811       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:25:10.011024       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:25:10.011165       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:25:10.036067       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:25:10.036123       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:25:10.040365       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:25:10.040704       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:25:10.040730       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:25:10.043213       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:25:10.043236       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:25:10.043535       1 config.go:200] "Starting service config controller"
	I1009 19:25:10.043555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:25:10.043874       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:25:10.043890       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:25:10.044275       1 config.go:309] "Starting node config controller"
	I1009 19:25:10.044291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:25:10.044299       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:25:10.143425       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 19:25:10.144610       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:25:10.144634       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2957ca26781427c6cacc0b8aa15dae567df148aa2ae15e0f6d497fd8efa73548] <==
	E1009 19:25:01.442200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:25:01.446358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:25:01.446553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 19:25:01.446696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:25:01.446978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:25:01.448023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 19:25:01.448095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:25:01.448101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:25:01.448161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:25:01.448202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:25:01.448299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:25:01.448306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:25:02.290179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:25:02.327007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:25:02.339390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:25:02.445094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:25:02.545004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:25:02.617368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1009 19:25:04.822487       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:25:54.465360       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1009 19:25:54.465381       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1009 19:25:54.465401       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1009 19:25:54.465423       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:25:54.465636       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1009 19:25:54.465651       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a6f5fd6f4bea9fd4b97c0d26c6a3fa86cf586c63afc8d2d4b8416f3dcd47294d] <==
	I1009 19:26:04.258829       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:26:06.934442       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:26:06.934558       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:26:06.934594       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:26:06.934646       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:26:06.987498       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:26:06.987537       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:26:06.991958       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:26:06.992218       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:26:06.992245       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:26:06.992273       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:26:07.093169       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.328789    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2d24444a7ec3d0d1416185620fa9a73f" pod="kube-system/kube-scheduler-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: I1009 19:26:02.343835    1312 scope.go:117] "RemoveContainer" containerID="535e956a01dd2ee2c226e2f1d33a85c572410afa635f7652c0a09176ba5497f3"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344375    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9b9a7fa2600eef322b52876b799827a6" pod="kube-system/etcd-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344544    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2d24444a7ec3d0d1416185620fa9a73f" pod="kube-system/kube-scheduler-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344689    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6dc808328c8ee8432c86074d5e1ec618" pod="kube-system/kube-apiserver-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344835    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jmm4z\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25aa7840-d779-4e2c-9dc2-ce45b5a58dab" pod="kube-system/kindnet-jmm4z"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.344995    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clcz6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="738e376b-82bd-49dd-9c74-adde76b723b0" pod="kube-system/kube-proxy-clcz6"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.345136    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-4766q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8f8889d0-e1aa-4b5b-9d6d-863d79f4f451" pod="kube-system/coredns-66bc5c9577-4766q"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: I1009 19:26:02.364636    1312 scope.go:117] "RemoveContainer" containerID="adff605a7c65b3da1ff45fc7f3d4b815ae5ef6f5e5b5c572df2a48a654062ece"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.365533    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-clcz6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="738e376b-82bd-49dd-9c74-adde76b723b0" pod="kube-system/kube-proxy-clcz6"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.365917    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-4766q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8f8889d0-e1aa-4b5b-9d6d-863d79f4f451" pod="kube-system/coredns-66bc5c9577-4766q"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366094    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9b9a7fa2600eef322b52876b799827a6" pod="kube-system/etcd-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366440    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2d24444a7ec3d0d1416185620fa9a73f" pod="kube-system/kube-scheduler-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366637    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e266b54e9eebb25e8e80a2f6e2c83a55" pod="kube-system/kube-controller-manager-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366789    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-446510\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6dc808328c8ee8432c86074d5e1ec618" pod="kube-system/kube-apiserver-pause-446510"
	Oct 09 19:26:02 pause-446510 kubelet[1312]: E1009 19:26:02.366961    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-jmm4z\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25aa7840-d779-4e2c-9dc2-ce45b5a58dab" pod="kube-system/kindnet-jmm4z"
	Oct 09 19:26:04 pause-446510 kubelet[1312]: W1009 19:26:04.280478    1312 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 09 19:26:06 pause-446510 kubelet[1312]: E1009 19:26:06.906794    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-446510\" is forbidden: User \"system:node:pause-446510\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-446510' and this object" podUID="e266b54e9eebb25e8e80a2f6e2c83a55" pod="kube-system/kube-controller-manager-pause-446510"
	Oct 09 19:26:06 pause-446510 kubelet[1312]: E1009 19:26:06.906977    1312 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-446510\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-446510' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 09 19:26:06 pause-446510 kubelet[1312]: E1009 19:26:06.907178    1312 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-446510\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-446510' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 09 19:26:06 pause-446510 kubelet[1312]: E1009 19:26:06.907648    1312 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-446510\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-446510' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 09 19:26:14 pause-446510 kubelet[1312]: W1009 19:26:14.300916    1312 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 09 19:26:18 pause-446510 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:26:18 pause-446510 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:26:18 pause-446510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-446510 -n pause-446510
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-446510 -n pause-446510: exit status 2 (376.044611ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-446510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.41s)
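A minimal sketch of re-running the post-mortem check by hand, using the same binary and profile named in the transcript above; the pause invocation itself is not shown in this excerpt, so its flags are assumed, while the status probe is copied verbatim from helpers_test.go:262:

	# attempt the pause that TestPause/serial/Pause exercises (flags assumed)
	out/minikube-linux-arm64 pause -p pause-446510
	# repeat the exact status probe from the post-mortem step above
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-446510 -n pause-446510
	echo $?   # the transcript records exit status 2 here, with "Running" on stdout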

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (353.419914ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:36:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
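The MK_ADDON_ENABLE_PAUSED failure comes from minikube's paused-state check: before enabling the addon it shells into the node and runs runc list, and on this crio node the /run/runc state directory does not exist, so the check itself errors out. Reproducing the probe by hand looks roughly like this (profile name from the log; purely illustrative):

	out/minikube-linux-arm64 ssh -p old-k8s-version-271815 sudo runc list -f json
	out/minikube-linux-arm64 ssh -p old-k8s-version-271815 ls -la /run/runc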
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-271815 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-271815 describe deploy/metrics-server -n kube-system: exit status 1 (128.267373ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-271815 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
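The assertion above compares the metrics-server deployment's container image against the --images/--registries overrides passed to addons enable; the deployment info is empty here because the deployment was never created. When it does exist, a quick way to pull just that field (a kubectl sketch, not part of the harness):

	kubectl --context old-k8s-version-271815 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'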
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-271815
helpers_test.go:243: (dbg) docker inspect old-k8s-version-271815:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980",
	        "Created": "2025-10-09T19:35:50.362074272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 461195,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:35:50.428460105Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/hostname",
	        "HostsPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/hosts",
	        "LogPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980-json.log",
	        "Name": "/old-k8s-version-271815",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-271815:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-271815",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980",
	                "LowerDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-271815",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-271815/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-271815",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-271815",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-271815",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "833e1238e7be6c13b9b39335f21432c7e7fc8041fa6935ce950e94af499aa7ee",
	            "SandboxKey": "/var/run/docker/netns/833e1238e7be",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-271815": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:18:bb:a1:10:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9b70e298602c2790a8fd04817e351fbf4f06c3fbce53648b556f8d8aa63fa4cc",
	                    "EndpointID": "4b5bab8c6c2b0c18dab3b095c060e395e809dd77b827298b922e29b933bd77b5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-271815",
	                        "395bb50f3c39"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-271815 -n old-k8s-version-271815
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-271815 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-271815 logs -n 25: (2.130043007s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-224541 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo containerd config dump                                                                                                                                                                                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo crio config                                                                                                                                                                                                             │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ delete  │ -p cilium-224541                                                                                                                                                                                                                              │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ start   │ -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ force-systemd-flag-476949 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-flag-476949                                                                                                                                                                                                                  │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-028248                                                                                                                                                                                                                   │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ cert-options-983220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ -p cert-options-983220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ delete  │ -p cert-options-983220                                                                                                                                                                                                                        │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:36:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:36:44.245433  463454 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:36:44.245548  463454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:36:44.245552  463454 out.go:374] Setting ErrFile to fd 2...
	I1009 19:36:44.245556  463454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:36:44.245910  463454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:36:44.246580  463454 out.go:368] Setting JSON to false
	I1009 19:36:44.247617  463454 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8356,"bootTime":1760030249,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:36:44.247676  463454 start.go:141] virtualization:  
	I1009 19:36:44.249200  463454 out.go:179] * [cert-expiration-259172] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:36:44.250279  463454 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:36:44.250414  463454 notify.go:220] Checking for updates...
	I1009 19:36:44.254064  463454 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:36:44.256661  463454 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:36:44.258856  463454 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:36:44.261754  463454 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:36:44.263906  463454 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:36:44.265822  463454 config.go:182] Loaded profile config "cert-expiration-259172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:36:44.266417  463454 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:36:44.303649  463454 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:36:44.303771  463454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:36:44.367068  463454 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:36:44.357941821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:36:44.367170  463454 docker.go:318] overlay module found
	I1009 19:36:44.368647  463454 out.go:179] * Using the docker driver based on existing profile
	I1009 19:36:44.369724  463454 start.go:305] selected driver: docker
	I1009 19:36:44.369730  463454 start.go:925] validating driver "docker" against &{Name:cert-expiration-259172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:36:44.369827  463454 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:36:44.370637  463454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:36:44.424194  463454 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:36:44.41420243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:36:44.424486  463454 cni.go:84] Creating CNI manager for ""
	I1009 19:36:44.424560  463454 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:36:44.424604  463454 start.go:349] cluster config:
	{Name:cert-expiration-259172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1009 19:36:44.425964  463454 out.go:179] * Starting "cert-expiration-259172" primary control-plane node in "cert-expiration-259172" cluster
	I1009 19:36:44.426970  463454 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:36:44.428099  463454 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:36:44.429320  463454 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:36:44.429351  463454 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:36:44.429358  463454 cache.go:64] Caching tarball of preloaded images
	I1009 19:36:44.429398  463454 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:36:44.429439  463454 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:36:44.429448  463454 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:36:44.429556  463454 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/config.json ...
	I1009 19:36:44.449120  463454 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:36:44.449130  463454 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:36:44.449142  463454 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:36:44.449170  463454 start.go:360] acquireMachinesLock for cert-expiration-259172: {Name:mk65f125499ece3a4312e2f3a76b34efae63b1d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:36:44.449218  463454 start.go:364] duration metric: took 33.288µs to acquireMachinesLock for "cert-expiration-259172"
	I1009 19:36:44.449236  463454 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:36:44.449240  463454 fix.go:54] fixHost starting: 
	I1009 19:36:44.449486  463454 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:36:44.467642  463454 fix.go:112] recreateIfNeeded on cert-expiration-259172: state=Running err=<nil>
	W1009 19:36:44.467661  463454 fix.go:138] unexpected machine state, will restart: <nil>
	W1009 19:36:44.290693  460802 node_ready.go:57] node "old-k8s-version-271815" has "Ready":"False" status (will retry)
	I1009 19:36:44.789742  460802 node_ready.go:49] node "old-k8s-version-271815" is "Ready"
	I1009 19:36:44.789773  460802 node_ready.go:38] duration metric: took 12.503075671s for node "old-k8s-version-271815" to be "Ready" ...
	I1009 19:36:44.789787  460802 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:36:44.789856  460802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:36:44.806872  460802 api_server.go:72] duration metric: took 14.266593862s to wait for apiserver process to appear ...
	I1009 19:36:44.806897  460802 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:36:44.806919  460802 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:36:44.815659  460802 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:36:44.816999  460802 api_server.go:141] control plane version: v1.28.0
	I1009 19:36:44.817027  460802 api_server.go:131] duration metric: took 10.122677ms to wait for apiserver health ...
	I1009 19:36:44.817036  460802 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:36:44.821166  460802 system_pods.go:59] 8 kube-system pods found
	I1009 19:36:44.821205  460802 system_pods.go:61] "coredns-5dd5756b68-ftv2x" [dc6318da-ce5f-4d30-9999-62b2f083b2da] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:36:44.821214  460802 system_pods.go:61] "etcd-old-k8s-version-271815" [410e417a-3808-4b54-81f4-ca4e6dc04b4c] Running
	I1009 19:36:44.821220  460802 system_pods.go:61] "kindnet-t5pvl" [6cb7e417-e089-4f17-b9d6-9eb1ad6d968e] Running
	I1009 19:36:44.821225  460802 system_pods.go:61] "kube-apiserver-old-k8s-version-271815" [d358823b-82e8-4e81-8ba0-4f01275fe48d] Running
	I1009 19:36:44.821231  460802 system_pods.go:61] "kube-controller-manager-old-k8s-version-271815" [d2e23f8b-cffe-4f84-ab0d-5324794b460d] Running
	I1009 19:36:44.821235  460802 system_pods.go:61] "kube-proxy-7j6jw" [f8087fcd-ccb5-438a-8b76-034287b3cd28] Running
	I1009 19:36:44.821248  460802 system_pods.go:61] "kube-scheduler-old-k8s-version-271815" [166c7cee-c35b-48a2-a799-9157538ac799] Running
	I1009 19:36:44.821259  460802 system_pods.go:61] "storage-provisioner" [f5406654-bb8c-49c3-a7a4-e3a13517e0e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:36:44.821272  460802 system_pods.go:74] duration metric: took 4.230954ms to wait for pod list to return data ...
	I1009 19:36:44.821281  460802 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:36:44.823837  460802 default_sa.go:45] found service account: "default"
	I1009 19:36:44.823872  460802 default_sa.go:55] duration metric: took 2.585365ms for default service account to be created ...
	I1009 19:36:44.823881  460802 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:36:44.827661  460802 system_pods.go:86] 8 kube-system pods found
	I1009 19:36:44.827696  460802 system_pods.go:89] "coredns-5dd5756b68-ftv2x" [dc6318da-ce5f-4d30-9999-62b2f083b2da] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:36:44.827702  460802 system_pods.go:89] "etcd-old-k8s-version-271815" [410e417a-3808-4b54-81f4-ca4e6dc04b4c] Running
	I1009 19:36:44.827708  460802 system_pods.go:89] "kindnet-t5pvl" [6cb7e417-e089-4f17-b9d6-9eb1ad6d968e] Running
	I1009 19:36:44.827712  460802 system_pods.go:89] "kube-apiserver-old-k8s-version-271815" [d358823b-82e8-4e81-8ba0-4f01275fe48d] Running
	I1009 19:36:44.827717  460802 system_pods.go:89] "kube-controller-manager-old-k8s-version-271815" [d2e23f8b-cffe-4f84-ab0d-5324794b460d] Running
	I1009 19:36:44.827720  460802 system_pods.go:89] "kube-proxy-7j6jw" [f8087fcd-ccb5-438a-8b76-034287b3cd28] Running
	I1009 19:36:44.827724  460802 system_pods.go:89] "kube-scheduler-old-k8s-version-271815" [166c7cee-c35b-48a2-a799-9157538ac799] Running
	I1009 19:36:44.827730  460802 system_pods.go:89] "storage-provisioner" [f5406654-bb8c-49c3-a7a4-e3a13517e0e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:36:44.827756  460802 retry.go:31] will retry after 267.478759ms: missing components: kube-dns
	I1009 19:36:45.138021  460802 system_pods.go:86] 8 kube-system pods found
	I1009 19:36:45.138063  460802 system_pods.go:89] "coredns-5dd5756b68-ftv2x" [dc6318da-ce5f-4d30-9999-62b2f083b2da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:36:45.138072  460802 system_pods.go:89] "etcd-old-k8s-version-271815" [410e417a-3808-4b54-81f4-ca4e6dc04b4c] Running
	I1009 19:36:45.138080  460802 system_pods.go:89] "kindnet-t5pvl" [6cb7e417-e089-4f17-b9d6-9eb1ad6d968e] Running
	I1009 19:36:45.138084  460802 system_pods.go:89] "kube-apiserver-old-k8s-version-271815" [d358823b-82e8-4e81-8ba0-4f01275fe48d] Running
	I1009 19:36:45.138090  460802 system_pods.go:89] "kube-controller-manager-old-k8s-version-271815" [d2e23f8b-cffe-4f84-ab0d-5324794b460d] Running
	I1009 19:36:45.138094  460802 system_pods.go:89] "kube-proxy-7j6jw" [f8087fcd-ccb5-438a-8b76-034287b3cd28] Running
	I1009 19:36:45.138098  460802 system_pods.go:89] "kube-scheduler-old-k8s-version-271815" [166c7cee-c35b-48a2-a799-9157538ac799] Running
	I1009 19:36:45.138103  460802 system_pods.go:89] "storage-provisioner" [f5406654-bb8c-49c3-a7a4-e3a13517e0e2] Running
	I1009 19:36:45.138113  460802 system_pods.go:126] duration metric: took 314.223561ms to wait for k8s-apps to be running ...
	I1009 19:36:45.138121  460802 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:36:45.138211  460802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:36:45.162727  460802 system_svc.go:56] duration metric: took 24.589483ms WaitForService to wait for kubelet
	I1009 19:36:45.162762  460802 kubeadm.go:586] duration metric: took 14.622489206s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:36:45.162789  460802 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:36:45.185916  460802 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:36:45.185962  460802 node_conditions.go:123] node cpu capacity is 2
	I1009 19:36:45.185979  460802 node_conditions.go:105] duration metric: took 23.183938ms to run NodePressure ...
	I1009 19:36:45.185992  460802 start.go:241] waiting for startup goroutines ...
	I1009 19:36:45.186000  460802 start.go:246] waiting for cluster config update ...
	I1009 19:36:45.186014  460802 start.go:255] writing updated cluster config ...
	I1009 19:36:45.186381  460802 ssh_runner.go:195] Run: rm -f paused
	I1009 19:36:45.191875  460802 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:36:45.201050  460802 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-ftv2x" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:46.209653  460802 pod_ready.go:94] pod "coredns-5dd5756b68-ftv2x" is "Ready"
	I1009 19:36:46.209682  460802 pod_ready.go:86] duration metric: took 1.008596222s for pod "coredns-5dd5756b68-ftv2x" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:46.214016  460802 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:46.220405  460802 pod_ready.go:94] pod "etcd-old-k8s-version-271815" is "Ready"
	I1009 19:36:46.220431  460802 pod_ready.go:86] duration metric: took 6.332326ms for pod "etcd-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:46.224704  460802 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:46.233481  460802 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-271815" is "Ready"
	I1009 19:36:46.233548  460802 pod_ready.go:86] duration metric: took 8.750453ms for pod "kube-apiserver-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:46.238301  460802 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:46.405554  460802 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-271815" is "Ready"
	I1009 19:36:46.405585  460802 pod_ready.go:86] duration metric: took 167.199755ms for pod "kube-controller-manager-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:46.607107  460802 pod_ready.go:83] waiting for pod "kube-proxy-7j6jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:47.005475  460802 pod_ready.go:94] pod "kube-proxy-7j6jw" is "Ready"
	I1009 19:36:47.005515  460802 pod_ready.go:86] duration metric: took 398.366942ms for pod "kube-proxy-7j6jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:47.206962  460802 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:47.605390  460802 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-271815" is "Ready"
	I1009 19:36:47.605419  460802 pod_ready.go:86] duration metric: took 398.42617ms for pod "kube-scheduler-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:36:47.605432  460802 pod_ready.go:40] duration metric: took 2.41351654s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:36:47.658020  460802 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1009 19:36:47.659411  460802 out.go:203] 
	W1009 19:36:47.660664  460802 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1009 19:36:47.662099  460802 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1009 19:36:47.663857  460802 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-271815" cluster and "default" namespace by default
	I1009 19:36:44.469067  463454 out.go:252] * Updating the running docker "cert-expiration-259172" container ...
	I1009 19:36:44.469086  463454 machine.go:93] provisionDockerMachine start ...
	I1009 19:36:44.469179  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:44.497356  463454 main.go:141] libmachine: Using SSH client type: native
	I1009 19:36:44.497668  463454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1009 19:36:44.497675  463454 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:36:44.662385  463454 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-259172
	
	I1009 19:36:44.662404  463454 ubuntu.go:182] provisioning hostname "cert-expiration-259172"
	I1009 19:36:44.662462  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:44.680103  463454 main.go:141] libmachine: Using SSH client type: native
	I1009 19:36:44.680467  463454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1009 19:36:44.680477  463454 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-259172 && echo "cert-expiration-259172" | sudo tee /etc/hostname
	I1009 19:36:44.864625  463454 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-259172
	
	I1009 19:36:44.864702  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:44.887863  463454 main.go:141] libmachine: Using SSH client type: native
	I1009 19:36:44.888169  463454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1009 19:36:44.888183  463454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-259172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-259172/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-259172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:36:45.099041  463454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:36:45.099058  463454 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:36:45.099077  463454 ubuntu.go:190] setting up certificates
	I1009 19:36:45.099088  463454 provision.go:84] configureAuth start
	I1009 19:36:45.099155  463454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-259172
	I1009 19:36:45.133659  463454 provision.go:143] copyHostCerts
	I1009 19:36:45.133722  463454 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:36:45.133737  463454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:36:45.133822  463454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:36:45.133930  463454 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:36:45.133934  463454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:36:45.133964  463454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:36:45.134023  463454 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:36:45.134026  463454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:36:45.134050  463454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:36:45.134098  463454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-259172 san=[127.0.0.1 192.168.76.2 cert-expiration-259172 localhost minikube]
	I1009 19:36:45.378092  463454 provision.go:177] copyRemoteCerts
	I1009 19:36:45.378171  463454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:36:45.378213  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:45.399439  463454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:36:45.511775  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:36:45.530289  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:36:45.548288  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 19:36:45.567293  463454 provision.go:87] duration metric: took 468.183034ms to configureAuth
	I1009 19:36:45.567310  463454 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:36:45.567498  463454 config.go:182] Loaded profile config "cert-expiration-259172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:36:45.567603  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:45.585348  463454 main.go:141] libmachine: Using SSH client type: native
	I1009 19:36:45.585637  463454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1009 19:36:45.585648  463454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:36:50.945044  463454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:36:50.945057  463454 machine.go:96] duration metric: took 6.475965656s to provisionDockerMachine
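For context on the provisioning step logged just above: a CRIO_MINIKUBE_OPTIONS override is written to /etc/sysconfig/crio.minikube and CRI-O is restarted so the service CIDR (10.96.0.0/12) is treated as an insecure registry range. A minimal Go sketch of how such a remote command string can be assembled follows; crioProvisionCmd is a hypothetical helper for illustration, not minikube's actual ubuntu.go code.

// Illustrative sketch only: build the shell command that installs the
// CRIO_MINIKUBE_OPTIONS override and restarts the crio service.
package main

import "fmt"

func crioProvisionCmd(serviceCIDR string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
	return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
		opts + "\n" +
		"\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
}

func main() {
	// Prints the same command seen in the log above.
	fmt.Println(crioProvisionCmd("10.96.0.0/12"))
}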
	I1009 19:36:50.945067  463454 start.go:293] postStartSetup for "cert-expiration-259172" (driver="docker")
	I1009 19:36:50.945076  463454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:36:50.945142  463454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:36:50.945187  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:50.969613  463454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:36:51.082771  463454 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:36:51.086464  463454 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:36:51.086482  463454 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:36:51.086492  463454 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:36:51.086551  463454 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:36:51.086629  463454 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:36:51.086736  463454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:36:51.095061  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:36:51.116179  463454 start.go:296] duration metric: took 171.088769ms for postStartSetup
	I1009 19:36:51.116256  463454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:36:51.116329  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:51.135988  463454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:36:51.236567  463454 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:36:51.241646  463454 fix.go:56] duration metric: took 6.792399418s for fixHost
	I1009 19:36:51.241661  463454 start.go:83] releasing machines lock for "cert-expiration-259172", held for 6.792436309s
	I1009 19:36:51.241728  463454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-259172
	I1009 19:36:51.265828  463454 ssh_runner.go:195] Run: cat /version.json
	I1009 19:36:51.265862  463454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:36:51.265871  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:51.265922  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:51.286740  463454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:36:51.291801  463454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:36:51.478176  463454 ssh_runner.go:195] Run: systemctl --version
	I1009 19:36:51.484674  463454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:36:51.530290  463454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:36:51.535790  463454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:36:51.535853  463454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:36:51.543693  463454 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:36:51.543706  463454 start.go:495] detecting cgroup driver to use...
	I1009 19:36:51.543745  463454 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:36:51.543790  463454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:36:51.559546  463454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:36:51.573406  463454 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:36:51.573456  463454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:36:51.589554  463454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:36:51.604372  463454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:36:51.756938  463454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:36:51.907716  463454 docker.go:234] disabling docker service ...
	I1009 19:36:51.907777  463454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:36:51.924206  463454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:36:51.937117  463454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:36:52.096565  463454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:36:52.262806  463454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:36:52.278514  463454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:36:52.293370  463454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:36:52.293424  463454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:36:52.302887  463454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:36:52.302942  463454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:36:52.312796  463454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:36:52.322297  463454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:36:52.331293  463454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:36:52.339931  463454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:36:52.349091  463454 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:36:52.358328  463454 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:36:52.367048  463454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:36:52.374832  463454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:36:52.382155  463454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:36:52.533413  463454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:36:52.708199  463454 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:36:52.708274  463454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:36:52.712072  463454 start.go:563] Will wait 60s for crictl version
	I1009 19:36:52.712122  463454 ssh_runner.go:195] Run: which crictl
	I1009 19:36:52.715660  463454 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:36:52.742244  463454 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
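The "Will wait 60s for socket path" step above polls until /var/run/crio/crio.sock appears before crictl is queried. A minimal sketch of that kind of bounded wait, assuming a plain os.Stat poll rather than minikube's own retry helpers:

// Sketch: wait for a socket file to exist, with a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}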
	I1009 19:36:52.742317  463454 ssh_runner.go:195] Run: crio --version
	I1009 19:36:52.774275  463454 ssh_runner.go:195] Run: crio --version
	I1009 19:36:52.806949  463454 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:36:52.809848  463454 cli_runner.go:164] Run: docker network inspect cert-expiration-259172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:36:52.825566  463454 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:36:52.829591  463454 kubeadm.go:883] updating cluster {Name:cert-expiration-259172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:36:52.829703  463454 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:36:52.829760  463454 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:36:52.867624  463454 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:36:52.867636  463454 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:36:52.867692  463454 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:36:52.892164  463454 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:36:52.892176  463454 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:36:52.892182  463454 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 19:36:52.892270  463454 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-259172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:36:52.892346  463454 ssh_runner.go:195] Run: crio config
	I1009 19:36:52.948398  463454 cni.go:84] Creating CNI manager for ""
	I1009 19:36:52.948410  463454 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:36:52.948425  463454 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:36:52.948447  463454 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-259172 NodeName:cert-expiration-259172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:36:52.948580  463454 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-259172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
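
The kubeadm.yaml generated above bundles four documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As a sketch only, assuming the file has been saved locally as kubeadm.yaml and using gopkg.in/yaml.v3 (minikube itself templates this file in Go), the following lists the apiVersion and kind of each document:

// Sketch: enumerate the documents in a multi-document kubeadm.yaml.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var tm typeMeta
		if err := dec.Decode(&tm); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
	}
}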
	
	I1009 19:36:52.948646  463454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:36:52.956432  463454 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:36:52.956491  463454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:36:52.963854  463454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1009 19:36:52.977334  463454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:36:52.993091  463454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1009 19:36:53.007111  463454 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:36:53.011353  463454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:36:53.161077  463454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:36:53.176509  463454 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172 for IP: 192.168.76.2
	I1009 19:36:53.176534  463454 certs.go:195] generating shared ca certs ...
	I1009 19:36:53.176549  463454 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:36:53.176691  463454 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:36:53.176730  463454 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:36:53.176736  463454 certs.go:257] generating profile certs ...
	W1009 19:36:53.176849  463454 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1009 19:36:53.177050  463454 certs.go:624] cert expired /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.crt: expiration: 2025-10-09 19:36:21 +0000 UTC, now: 2025-10-09 19:36:53.177043956 +0000 UTC m=+8.980868600
	I1009 19:36:53.177150  463454 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.key
	I1009 19:36:53.177168  463454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.crt with IP's: []
	I1009 19:36:53.468346  463454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.crt ...
	I1009 19:36:53.468363  463454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.crt: {Name:mk3ab17a25e8e728114875e4b4dda201c2beeec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:36:53.468488  463454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.key ...
	I1009 19:36:53.468494  463454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/client.key: {Name:mka50b8c2ad287542113d96be2edd7ab97f3f9c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1009 19:36:53.468679  463454 out.go:285] ! Certificate apiserver.crt.73a2860b has expired. Generating a new one...
	I1009 19:36:53.468699  463454 certs.go:624] cert expired /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b: expiration: 2025-10-09 19:36:21 +0000 UTC, now: 2025-10-09 19:36:53.468693249 +0000 UTC m=+9.272517885
	I1009 19:36:53.468776  463454 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key.73a2860b
	I1009 19:36:53.468793  463454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 19:36:54.320852  463454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b ...
	I1009 19:36:54.320874  463454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b: {Name:mk4f66d1d878208061ab842c22d612743a720728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:36:54.321036  463454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key.73a2860b ...
	I1009 19:36:54.321051  463454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key.73a2860b: {Name:mk710929fdd0ad36525027cd3c0e3a801b10547e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:36:54.321160  463454 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt.73a2860b -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt
	I1009 19:36:54.321311  463454 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key.73a2860b -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key
	W1009 19:36:54.321500  463454 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1009 19:36:54.321525  463454 certs.go:624] cert expired /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt: expiration: 2025-10-09 19:36:21 +0000 UTC, now: 2025-10-09 19:36:54.321519974 +0000 UTC m=+10.125344618
	I1009 19:36:54.321599  463454 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.key
	I1009 19:36:54.321620  463454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt with IP's: []
	I1009 19:36:54.775243  463454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt ...
	I1009 19:36:54.775258  463454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt: {Name:mkcba953cce8f79d8e16d386042f318b0ba0fdbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:36:54.775402  463454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.key ...
	I1009 19:36:54.775409  463454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.key: {Name:mk6980b45bcb047c24048d689aa63f372d567a47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:36:54.775593  463454 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:36:54.775628  463454 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:36:54.775636  463454 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:36:54.775659  463454 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:36:54.775681  463454 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:36:54.775701  463454 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:36:54.775739  463454 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:36:54.776379  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:36:54.798947  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:36:54.826833  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:36:54.850529  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:36:54.874483  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 19:36:54.913139  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:36:54.943418  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:36:54.969529  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/cert-expiration-259172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:36:54.995508  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:36:55.024047  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:36:55.059134  463454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:36:55.086530  463454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:36:55.116503  463454 ssh_runner.go:195] Run: openssl version
	I1009 19:36:55.124205  463454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:36:55.135988  463454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:36:55.140284  463454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:36:55.140354  463454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:36:55.211303  463454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:36:55.219522  463454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:36:55.231693  463454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:36:55.236034  463454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:36:55.236094  463454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:36:55.282546  463454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:36:55.294555  463454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:36:55.304641  463454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:36:55.309458  463454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:36:55.309525  463454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:36:55.364970  463454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:36:55.374051  463454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:36:55.378695  463454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:36:55.432162  463454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:36:55.484155  463454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:36:55.535291  463454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:36:55.590540  463454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:36:55.638103  463454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
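The run of openssl x509 -noout -checkend 86400 commands above asks whether each control-plane certificate expires within the next 24 hours. An equivalent check written in Go, shown only as a sketch with an illustrative certificate path:

// Sketch: report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" is past the certificate's NotAfter time,
	// i.e. the cert will have expired within the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}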
	I1009 19:36:55.687682  463454 kubeadm.go:400] StartCluster: {Name:cert-expiration-259172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-259172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:36:55.687759  463454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:36:55.687832  463454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:36:55.728064  463454 cri.go:89] found id: "d9ac5bbea4734d8818aab25b9117b926dac511121da97700001d45c68fdc73fd"
	I1009 19:36:55.728076  463454 cri.go:89] found id: "43e935c1753cabb4178e1b7d4c92f6c741d0544ae4e3c59d410fbc64b400ccdc"
	I1009 19:36:55.728079  463454 cri.go:89] found id: "73462439fb0a59d13eaaff02dca5eb7dfa706da37927b4558a98974c17cf74c2"
	I1009 19:36:55.728082  463454 cri.go:89] found id: "91774529e77c32361cb563512e6a069d98d180c29d7b4e0f1e653b044c79fe9f"
	I1009 19:36:55.728084  463454 cri.go:89] found id: "d0a2234662662960dc66782a8cd57439a673719620eb68fcc8cfc7107b6ea3b3"
	I1009 19:36:55.728087  463454 cri.go:89] found id: "9d302e6c07ee88318b88fff758d8911a7efe5f6dbf369d31f8ab2dc01cbcbb7b"
	I1009 19:36:55.728090  463454 cri.go:89] found id: "856eeddf6582d1f27213d933fa6b659319b3c1180d6e73c52fff7ed76be31f2d"
	I1009 19:36:55.728093  463454 cri.go:89] found id: "9eb6573817c19a54b1de10b0275697a454f98a6bc1d0b7f8700f38f63f7a38c2"
	I1009 19:36:55.728095  463454 cri.go:89] found id: "93559c38266d0a09e0bcedb513e34ba8814ac56bf6e59653da2254938f9bdf3b"
	I1009 19:36:55.728102  463454 cri.go:89] found id: "42170445a353179b6b39e18bc2e9212ec110fc2e0930fa23fef791eafee0d0e1"
	I1009 19:36:55.728104  463454 cri.go:89] found id: "d16339df10838d615a419461bbceddfbea7b28f84cc6dd6ee3f8f93d281fc9a7"
	I1009 19:36:55.728106  463454 cri.go:89] found id: "cc4f859336ee98381687674e9b72800eb7a556177fa1e110b995e21b7cd1c061"
	I1009 19:36:55.728108  463454 cri.go:89] found id: "1a35f1c7d26d6f473e8120a13d28dbe2a85f600493ac6ee3e1e9d5e80ca74bbc"
	I1009 19:36:55.728111  463454 cri.go:89] found id: "6ef32320808ac10b9f5c5627531d9c295b0442702772cf729c1118a05e5018d8"
	I1009 19:36:55.728113  463454 cri.go:89] found id: "d2b4d2d279703615abb922611b5f689f3e7ed2ee01217a87f5095e3dc2091631"
	I1009 19:36:55.728119  463454 cri.go:89] found id: "699ac010dc07c1cd3b2cea34a22559996ce34ac59365a4bc1a4dd3c52563df50"
	I1009 19:36:55.728122  463454 cri.go:89] found id: ""
	I1009 19:36:55.728172  463454 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:36:55.750765  463454 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:36:55Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:36:55.750832  463454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:36:55.765696  463454 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:36:55.765712  463454 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:36:55.765778  463454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:36:55.777033  463454 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:36:55.777785  463454 kubeconfig.go:125] found "cert-expiration-259172" server: "https://192.168.76.2:8443"
	I1009 19:36:55.779930  463454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:36:55.789752  463454 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 19:36:55.789786  463454 kubeadm.go:601] duration metric: took 24.069445ms to restartPrimaryControlPlane
	I1009 19:36:55.789795  463454 kubeadm.go:402] duration metric: took 102.124705ms to StartCluster
	I1009 19:36:55.789811  463454 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:36:55.789890  463454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:36:55.790915  463454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:36:55.791160  463454 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:36:55.791482  463454 config.go:182] Loaded profile config "cert-expiration-259172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:36:55.791525  463454 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:36:55.791630  463454 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-259172"
	I1009 19:36:55.791686  463454 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-259172"
	W1009 19:36:55.791692  463454 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:36:55.791712  463454 host.go:66] Checking if "cert-expiration-259172" exists ...
	I1009 19:36:55.792429  463454 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:36:55.792590  463454 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-259172"
	I1009 19:36:55.792606  463454 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-259172"
	I1009 19:36:55.792934  463454 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:36:55.798470  463454 out.go:179] * Verifying Kubernetes components...
	I1009 19:36:55.801710  463454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:36:55.832008  463454 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:36:55.835980  463454 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:36:55.835992  463454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:36:55.836058  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:55.837293  463454 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-259172"
	W1009 19:36:55.837302  463454 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:36:55.837325  463454 host.go:66] Checking if "cert-expiration-259172" exists ...
	I1009 19:36:55.837728  463454 cli_runner.go:164] Run: docker container inspect cert-expiration-259172 --format={{.State.Status}}
	I1009 19:36:55.882679  463454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:36:55.906881  463454 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:36:55.906900  463454 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:36:55.906965  463454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-259172
	I1009 19:36:55.937485  463454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/cert-expiration-259172/id_rsa Username:docker}
	I1009 19:36:56.097743  463454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:36:56.119260  463454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:36:56.187760  463454 ssh_runner.go:195] Run: sudo systemctl start kubelet
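The addon manifests above are applied by shelling out to the bundled kubectl with KUBECONFIG pointed at the in-VM kubeconfig. A hedged sketch of that invocation pattern is shown below; applyManifest is a hypothetical helper, not minikube's addons code.

// Sketch: apply a manifest the same way the log shows, via sudo + kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(kubectl, kubeconfig, manifest string) error {
	// sudo accepts leading VAR=value assignments before the command.
	cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl, "apply", "-f", manifest)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}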
	
	
	==> CRI-O <==
	Oct 09 19:36:44 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:44.970663202Z" level=info msg="Created container 2334e8b944781c8be2644e31dd3d2c3cedd6471cc6f933fb4cb04c19c5d0a3d8: kube-system/coredns-5dd5756b68-ftv2x/coredns" id=9a81cfef-ed73-4fd1-8755-031748fc3ae7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:36:44 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:44.972326727Z" level=info msg="Starting container: 2334e8b944781c8be2644e31dd3d2c3cedd6471cc6f933fb4cb04c19c5d0a3d8" id=56ceab3d-18f5-49ff-a01a-2177579a21c2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:36:44 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:44.974771611Z" level=info msg="Started container" PID=1956 containerID=2334e8b944781c8be2644e31dd3d2c3cedd6471cc6f933fb4cb04c19c5d0a3d8 description=kube-system/coredns-5dd5756b68-ftv2x/coredns id=56ceab3d-18f5-49ff-a01a-2177579a21c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a63f425ce3212a6ba79693e9f3f6657cef3c1cbcfaa213dbadb273ff546b504a
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.153518488Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3edc67c9-1db5-41c5-b7c9-7d81f2a7d35c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.15360406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.163226917Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aca7c8da8bae246bdcf2ca1ea349a743167170627ce81b7299235605fa834b6f UID:76114c03-98f6-4ea8-a226-c9d7b7a2cb8c NetNS:/var/run/netns/0c011fe6-7be4-4217-9e3d-df65671f71b0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079808}] Aliases:map[]}"
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.16327934Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.177604228Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aca7c8da8bae246bdcf2ca1ea349a743167170627ce81b7299235605fa834b6f UID:76114c03-98f6-4ea8-a226-c9d7b7a2cb8c NetNS:/var/run/netns/0c011fe6-7be4-4217-9e3d-df65671f71b0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079808}] Aliases:map[]}"
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.177750773Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.1823053Z" level=info msg="Ran pod sandbox aca7c8da8bae246bdcf2ca1ea349a743167170627ce81b7299235605fa834b6f with infra container: default/busybox/POD" id=3edc67c9-1db5-41c5-b7c9-7d81f2a7d35c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.183581737Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=766e38b3-d5f0-4282-969f-f9d6c2d0b4e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.183708852Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=766e38b3-d5f0-4282-969f-f9d6c2d0b4e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.183753759Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=766e38b3-d5f0-4282-969f-f9d6c2d0b4e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.186360991Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d5eb57ee-a1dc-434d-8aba-ea9694c04bbd name=/runtime.v1.ImageService/PullImage
	Oct 09 19:36:48 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:48.188751983Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.11592733Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d5eb57ee-a1dc-434d-8aba-ea9694c04bbd name=/runtime.v1.ImageService/PullImage
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.11711451Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=33987fb3-bd0e-4d49-8629-a25ac97f53c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.119764311Z" level=info msg="Creating container: default/busybox/busybox" id=c8b40d24-1060-425a-b9ff-30cc9a58ddfb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.12067742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.125735559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.126304607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.142587113Z" level=info msg="Created container c58ab6affe1a8abf585dd65fff661d9ee081929dfa429153d0c755b4ade71ad5: default/busybox/busybox" id=c8b40d24-1060-425a-b9ff-30cc9a58ddfb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.143483631Z" level=info msg="Starting container: c58ab6affe1a8abf585dd65fff661d9ee081929dfa429153d0c755b4ade71ad5" id=1d2c86b8-76ff-46f8-a7c8-7d6d39e3ed5e name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:36:50 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:50.145112957Z" level=info msg="Started container" PID=2010 containerID=c58ab6affe1a8abf585dd65fff661d9ee081929dfa429153d0c755b4ade71ad5 description=default/busybox/busybox id=1d2c86b8-76ff-46f8-a7c8-7d6d39e3ed5e name=/runtime.v1.RuntimeService/StartContainer sandboxID=aca7c8da8bae246bdcf2ca1ea349a743167170627ce81b7299235605fa834b6f
	Oct 09 19:36:58 old-k8s-version-271815 crio[837]: time="2025-10-09T19:36:58.094334336Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	c58ab6affe1a8       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   10 seconds ago      Running             busybox                   0                   aca7c8da8bae2       busybox                                          default
	2334e8b944781       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      15 seconds ago      Running             coredns                   0                   a63f425ce3212       coredns-5dd5756b68-ftv2x                         kube-system
	31d289b528ea6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago      Running             storage-provisioner       0                   109195d686cd8       storage-provisioner                              kube-system
	565cd5cabffb5       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   4c1c8a54b394c       kindnet-t5pvl                                    kube-system
	d6a792be892bd       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      29 seconds ago      Running             kube-proxy                0                   89549bfe2d15c       kube-proxy-7j6jw                                 kube-system
	35f8d75b9fdda       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      50 seconds ago      Running             kube-scheduler            0                   dfdd3f8297070       kube-scheduler-old-k8s-version-271815            kube-system
	e6dad57dbbcac       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      50 seconds ago      Running             kube-apiserver            0                   9e8cc221ffd25       kube-apiserver-old-k8s-version-271815            kube-system
	5024e055997ae       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      50 seconds ago      Running             etcd                      0                   66295476c84f8       etcd-old-k8s-version-271815                      kube-system
	d6bcbea84a848       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      50 seconds ago      Running             kube-controller-manager   0                   7dac1e2036a65       kube-controller-manager-old-k8s-version-271815   kube-system
	
	
	==> coredns [2334e8b944781c8be2644e31dd3d2c3cedd6471cc6f933fb4cb04c19c5d0a3d8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49526 - 5756 "HINFO IN 7089653609762605464.5039744001412625011. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024332668s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-271815
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-271815
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=old-k8s-version-271815
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_36_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:36:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-271815
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:36:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:36:47 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:36:47 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:36:47 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:36:47 +0000   Thu, 09 Oct 2025 19:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-271815
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6c539d734e247bd9ce5083b6fd3cfd3
	  System UUID:                1963e6d2-e326-4444-bd99-5534a70044a9
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-ftv2x                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-old-k8s-version-271815                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-t5pvl                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-271815             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-271815    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-7j6jw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-271815             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 44s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s   kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s   kubelet          Node old-k8s-version-271815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s   kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node old-k8s-version-271815 event: Registered Node old-k8s-version-271815 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-271815 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:01] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5024e055997aef2a925fbead2c105f5ea2903de24abc9d235049672fb1433adc] <==
	{"level":"info","ts":"2025-10-09T19:36:09.937836Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:36:09.937963Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:36:09.935465Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-09T19:36:09.935514Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T19:36:09.938368Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T19:36:09.935671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-09T19:36:09.938506Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-09T19:36:10.906171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-09T19:36:10.906286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-09T19:36:10.906353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-09T19:36:10.906404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-09T19:36:10.90644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-09T19:36:10.90648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-09T19:36:10.906513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-09T19:36:10.911256Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T19:36:10.911419Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-271815 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-09T19:36:10.911454Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T19:36:10.912745Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-09T19:36:10.934175Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T19:36:10.935235Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-09T19:36:10.935454Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T19:36:10.935557Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T19:36:10.947231Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T19:36:10.945559Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-09T19:36:10.95019Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:37:00 up  2:19,  0 user,  load average: 4.80, 2.20, 1.98
	Linux old-k8s-version-271815 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [565cd5cabffb5df59a238f3ddeb659c7313e5631df4f30ff606144b7e1fcfd66] <==
	I1009 19:36:33.923036       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:36:33.923430       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:36:33.923616       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:36:33.923655       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:36:33.923698       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:36:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:36:34.215221       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:36:34.224585       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:36:34.227428       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:36:34.227608       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1009 19:36:34.414257       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:36:34.414356       1 metrics.go:72] Registering metrics
	I1009 19:36:34.414458       1 controller.go:711] "Syncing nftables rules"
	I1009 19:36:44.130430       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:36:44.130486       1 main.go:301] handling current node
	I1009 19:36:54.134204       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:36:54.134310       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6dad57dbbcac78376f1a465b42410990eb306c1a6e63921bf621edc9df86ed5] <==
	I1009 19:36:13.568876       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:36:13.571331       1 controller.go:624] quota admission added evaluator for: namespaces
	I1009 19:36:13.571868       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1009 19:36:13.571895       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1009 19:36:13.572972       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1009 19:36:13.573039       1 aggregator.go:166] initial CRD sync complete...
	I1009 19:36:13.573049       1 autoregister_controller.go:141] Starting autoregister controller
	I1009 19:36:13.573055       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:36:13.573061       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:36:13.768754       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:36:14.270934       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 19:36:14.277679       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 19:36:14.277701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:36:14.927650       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:36:14.976430       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:36:15.100351       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:36:15.108835       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1009 19:36:15.110104       1 controller.go:624] quota admission added evaluator for: endpoints
	I1009 19:36:15.115990       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:36:15.486526       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1009 19:36:16.604963       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1009 19:36:16.624291       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:36:16.642978       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1009 19:36:29.332805       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1009 19:36:30.288973       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d6bcbea84a8480f20bbb42ba12be35f6e483f72be1d5755607b600057f7b6279] <==
	I1009 19:36:29.436637       1 shared_informer.go:318] Caches are synced for resource quota
	I1009 19:36:29.476122       1 shared_informer.go:318] Caches are synced for cronjob
	I1009 19:36:29.483686       1 shared_informer.go:318] Caches are synced for resource quota
	I1009 19:36:29.829800       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 19:36:29.850981       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 19:36:29.851016       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1009 19:36:30.238827       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-prcpl"
	I1009 19:36:30.248541       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-ftv2x"
	I1009 19:36:30.261132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="923.359218ms"
	I1009 19:36:30.280890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.708305ms"
	I1009 19:36:30.313509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.567675ms"
	I1009 19:36:30.313602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.888µs"
	I1009 19:36:30.313794       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7j6jw"
	I1009 19:36:30.322425       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t5pvl"
	I1009 19:36:32.363855       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1009 19:36:32.400345       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-prcpl"
	I1009 19:36:32.415368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.224599ms"
	I1009 19:36:32.424197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.779811ms"
	I1009 19:36:32.424477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.586µs"
	I1009 19:36:44.535252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.254µs"
	I1009 19:36:44.573495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.118µs"
	I1009 19:36:45.141113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.967µs"
	I1009 19:36:46.044588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.63629ms"
	I1009 19:36:46.044927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.535µs"
	I1009 19:36:49.353309       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [d6a792be892bd9cea3ed936067fc1f9a81f1cb9b6d48ed3b0431147a5dccacf0] <==
	I1009 19:36:31.001573       1 server_others.go:69] "Using iptables proxy"
	I1009 19:36:31.026359       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1009 19:36:31.078943       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:36:31.084190       1 server_others.go:152] "Using iptables Proxier"
	I1009 19:36:31.084239       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1009 19:36:31.084251       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1009 19:36:31.084279       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 19:36:31.084507       1 server.go:846] "Version info" version="v1.28.0"
	I1009 19:36:31.084525       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:36:31.089015       1 config.go:315] "Starting node config controller"
	I1009 19:36:31.089032       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 19:36:31.089619       1 config.go:188] "Starting service config controller"
	I1009 19:36:31.089630       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 19:36:31.089647       1 config.go:97] "Starting endpoint slice config controller"
	I1009 19:36:31.089651       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 19:36:31.189936       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1009 19:36:31.189983       1 shared_informer.go:318] Caches are synced for node config
	I1009 19:36:31.189995       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [35f8d75b9fddad462dc74ad727cff87952f8a7d4c799aeb7c102fd45acb89e67] <==
	W1009 19:36:13.532198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:36:13.532425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1009 19:36:13.532250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 19:36:13.532488       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1009 19:36:13.532128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 19:36:13.532589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1009 19:36:14.380037       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 19:36:14.380084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1009 19:36:14.385386       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 19:36:14.385492       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 19:36:14.399692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 19:36:14.399756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1009 19:36:14.407030       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 19:36:14.407137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1009 19:36:14.482016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 19:36:14.482061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1009 19:36:14.582106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 19:36:14.582249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1009 19:36:14.592338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 19:36:14.592472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1009 19:36:14.676110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:36:14.676216       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1009 19:36:14.805690       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 19:36:14.805732       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1009 19:36:17.416363       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: I1009 19:36:30.423356    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8087fcd-ccb5-438a-8b76-034287b3cd28-kube-proxy\") pod \"kube-proxy-7j6jw\" (UID: \"f8087fcd-ccb5-438a-8b76-034287b3cd28\") " pod="kube-system/kube-proxy-7j6jw"
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: I1009 19:36:30.423413    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8087fcd-ccb5-438a-8b76-034287b3cd28-xtables-lock\") pod \"kube-proxy-7j6jw\" (UID: \"f8087fcd-ccb5-438a-8b76-034287b3cd28\") " pod="kube-system/kube-proxy-7j6jw"
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: I1009 19:36:30.423442    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgtjx\" (UniqueName: \"kubernetes.io/projected/f8087fcd-ccb5-438a-8b76-034287b3cd28-kube-api-access-jgtjx\") pod \"kube-proxy-7j6jw\" (UID: \"f8087fcd-ccb5-438a-8b76-034287b3cd28\") " pod="kube-system/kube-proxy-7j6jw"
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: I1009 19:36:30.423471    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cb7e417-e089-4f17-b9d6-9eb1ad6d968e-xtables-lock\") pod \"kindnet-t5pvl\" (UID: \"6cb7e417-e089-4f17-b9d6-9eb1ad6d968e\") " pod="kube-system/kindnet-t5pvl"
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: I1009 19:36:30.423499    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8087fcd-ccb5-438a-8b76-034287b3cd28-lib-modules\") pod \"kube-proxy-7j6jw\" (UID: \"f8087fcd-ccb5-438a-8b76-034287b3cd28\") " pod="kube-system/kube-proxy-7j6jw"
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: I1009 19:36:30.423525    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6cb7e417-e089-4f17-b9d6-9eb1ad6d968e-cni-cfg\") pod \"kindnet-t5pvl\" (UID: \"6cb7e417-e089-4f17-b9d6-9eb1ad6d968e\") " pod="kube-system/kindnet-t5pvl"
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: I1009 19:36:30.423547    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cb7e417-e089-4f17-b9d6-9eb1ad6d968e-lib-modules\") pod \"kindnet-t5pvl\" (UID: \"6cb7e417-e089-4f17-b9d6-9eb1ad6d968e\") " pod="kube-system/kindnet-t5pvl"
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: I1009 19:36:30.423579    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9mh\" (UniqueName: \"kubernetes.io/projected/6cb7e417-e089-4f17-b9d6-9eb1ad6d968e-kube-api-access-gb9mh\") pod \"kindnet-t5pvl\" (UID: \"6cb7e417-e089-4f17-b9d6-9eb1ad6d968e\") " pod="kube-system/kindnet-t5pvl"
	Oct 09 19:36:30 old-k8s-version-271815 kubelet[1378]: W1009 19:36:30.711488    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/crio-4c1c8a54b394cc71e2f8c253345d557ed9e67dec00270cc58d83ee823c2f6c90 WatchSource:0}: Error finding container 4c1c8a54b394cc71e2f8c253345d557ed9e67dec00270cc58d83ee823c2f6c90: Status 404 returned error can't find the container with id 4c1c8a54b394cc71e2f8c253345d557ed9e67dec00270cc58d83ee823c2f6c90
	Oct 09 19:36:33 old-k8s-version-271815 kubelet[1378]: I1009 19:36:33.937776    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7j6jw" podStartSLOduration=3.937731925 podCreationTimestamp="2025-10-09 19:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:36:30.934842964 +0000 UTC m=+14.362762962" watchObservedRunningTime="2025-10-09 19:36:33.937731925 +0000 UTC m=+17.365651932"
	Oct 09 19:36:36 old-k8s-version-271815 kubelet[1378]: I1009 19:36:36.740731    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-t5pvl" podStartSLOduration=3.592045396 podCreationTimestamp="2025-10-09 19:36:30 +0000 UTC" firstStartedPulling="2025-10-09 19:36:30.722934989 +0000 UTC m=+14.150854988" lastFinishedPulling="2025-10-09 19:36:33.871578168 +0000 UTC m=+17.299498167" observedRunningTime="2025-10-09 19:36:33.939066052 +0000 UTC m=+17.366986059" watchObservedRunningTime="2025-10-09 19:36:36.740688575 +0000 UTC m=+20.168608582"
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: I1009 19:36:44.482214    1378 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: I1009 19:36:44.530239    1378 topology_manager.go:215] "Topology Admit Handler" podUID="dc6318da-ce5f-4d30-9999-62b2f083b2da" podNamespace="kube-system" podName="coredns-5dd5756b68-ftv2x"
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: I1009 19:36:44.539384    1378 topology_manager.go:215] "Topology Admit Handler" podUID="f5406654-bb8c-49c3-a7a4-e3a13517e0e2" podNamespace="kube-system" podName="storage-provisioner"
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: I1009 19:36:44.627497    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdg65\" (UniqueName: \"kubernetes.io/projected/dc6318da-ce5f-4d30-9999-62b2f083b2da-kube-api-access-wdg65\") pod \"coredns-5dd5756b68-ftv2x\" (UID: \"dc6318da-ce5f-4d30-9999-62b2f083b2da\") " pod="kube-system/coredns-5dd5756b68-ftv2x"
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: I1009 19:36:44.627550    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc6318da-ce5f-4d30-9999-62b2f083b2da-config-volume\") pod \"coredns-5dd5756b68-ftv2x\" (UID: \"dc6318da-ce5f-4d30-9999-62b2f083b2da\") " pod="kube-system/coredns-5dd5756b68-ftv2x"
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: I1009 19:36:44.627578    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pnx\" (UniqueName: \"kubernetes.io/projected/f5406654-bb8c-49c3-a7a4-e3a13517e0e2-kube-api-access-p7pnx\") pod \"storage-provisioner\" (UID: \"f5406654-bb8c-49c3-a7a4-e3a13517e0e2\") " pod="kube-system/storage-provisioner"
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: I1009 19:36:44.627602    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f5406654-bb8c-49c3-a7a4-e3a13517e0e2-tmp\") pod \"storage-provisioner\" (UID: \"f5406654-bb8c-49c3-a7a4-e3a13517e0e2\") " pod="kube-system/storage-provisioner"
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: W1009 19:36:44.851597    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/crio-109195d686cd821400bf3678a1379205942ea5c752d2b2871075ec93ce1cef04 WatchSource:0}: Error finding container 109195d686cd821400bf3678a1379205942ea5c752d2b2871075ec93ce1cef04: Status 404 returned error can't find the container with id 109195d686cd821400bf3678a1379205942ea5c752d2b2871075ec93ce1cef04
	Oct 09 19:36:44 old-k8s-version-271815 kubelet[1378]: W1009 19:36:44.893448    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/crio-a63f425ce3212a6ba79693e9f3f6657cef3c1cbcfaa213dbadb273ff546b504a WatchSource:0}: Error finding container a63f425ce3212a6ba79693e9f3f6657cef3c1cbcfaa213dbadb273ff546b504a: Status 404 returned error can't find the container with id a63f425ce3212a6ba79693e9f3f6657cef3c1cbcfaa213dbadb273ff546b504a
	Oct 09 19:36:45 old-k8s-version-271815 kubelet[1378]: I1009 19:36:45.056108    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.056056932 podCreationTimestamp="2025-10-09 19:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:36:45.054176731 +0000 UTC m=+28.482096771" watchObservedRunningTime="2025-10-09 19:36:45.056056932 +0000 UTC m=+28.483976939"
	Oct 09 19:36:46 old-k8s-version-271815 kubelet[1378]: I1009 19:36:46.026863    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ftv2x" podStartSLOduration=16.026820007 podCreationTimestamp="2025-10-09 19:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:36:45.136997847 +0000 UTC m=+28.564917854" watchObservedRunningTime="2025-10-09 19:36:46.026820007 +0000 UTC m=+29.454740014"
	Oct 09 19:36:47 old-k8s-version-271815 kubelet[1378]: I1009 19:36:47.852028    1378 topology_manager.go:215] "Topology Admit Handler" podUID="76114c03-98f6-4ea8-a226-c9d7b7a2cb8c" podNamespace="default" podName="busybox"
	Oct 09 19:36:47 old-k8s-version-271815 kubelet[1378]: I1009 19:36:47.945933    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2f4z\" (UniqueName: \"kubernetes.io/projected/76114c03-98f6-4ea8-a226-c9d7b7a2cb8c-kube-api-access-c2f4z\") pod \"busybox\" (UID: \"76114c03-98f6-4ea8-a226-c9d7b7a2cb8c\") " pod="default/busybox"
	Oct 09 19:36:48 old-k8s-version-271815 kubelet[1378]: W1009 19:36:48.181676    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/crio-aca7c8da8bae246bdcf2ca1ea349a743167170627ce81b7299235605fa834b6f WatchSource:0}: Error finding container aca7c8da8bae246bdcf2ca1ea349a743167170627ce81b7299235605fa834b6f: Status 404 returned error can't find the container with id aca7c8da8bae246bdcf2ca1ea349a743167170627ce81b7299235605fa834b6f
	
	
	==> storage-provisioner [31d289b528ea612e85b4fb27aa57ebaae2f9a40492150ebbfbca4f1891ecebb4] <==
	I1009 19:36:44.936036       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:36:44.987241       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:36:44.987287       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 19:36:45.081740       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:36:45.081958       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-271815_dab1578d-4013-4d90-8b84-b9f8ef84f684!
	I1009 19:36:45.083118       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89b8e788-298a-4512-8566-f2088b6d05b0", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-271815_dab1578d-4013-4d90-8b84-b9f8ef84f684 became leader
	I1009 19:36:45.282343       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-271815_dab1578d-4013-4d90-8b84-b9f8ef84f684!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-271815 -n old-k8s-version-271815
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-271815 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.92s)
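The non-running-pod check in the post-mortem above (helpers_test.go:269) is a plain kubectl field-selector query. For anyone scripting the same check outside the harness, a rough client-go equivalent is sketched below; the kubeconfig path is a placeholder and the snippet is illustrative only, not part of the test suite:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the harness uses the "old-k8s-version-271815" context.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter as the kubectl call above: pods in any namespace whose phase is not Running.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}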

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-271815 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-271815 --alsologtostderr -v=1: exit status 80 (1.795983996s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-271815 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:38:24.644327  471179 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:24.644444  471179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:24.644449  471179 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:24.644453  471179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:24.644813  471179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:38:24.645188  471179 out.go:368] Setting JSON to false
	I1009 19:38:24.645209  471179 mustload.go:65] Loading cluster: old-k8s-version-271815
	I1009 19:38:24.645860  471179 config.go:182] Loaded profile config "old-k8s-version-271815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 19:38:24.646617  471179 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:38:24.665654  471179 host.go:66] Checking if "old-k8s-version-271815" exists ...
	I1009 19:38:24.665964  471179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:24.723756  471179 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:38:24.714200243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:38:24.724502  471179 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-271815 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 19:38:24.728059  471179 out.go:179] * Pausing node old-k8s-version-271815 ... 
	I1009 19:38:24.731167  471179 host.go:66] Checking if "old-k8s-version-271815" exists ...
	I1009 19:38:24.731529  471179 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:24.731579  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:38:24.748071  471179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:38:24.852669  471179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:24.865405  471179 pause.go:52] kubelet running: true
	I1009 19:38:24.865470  471179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:38:25.066406  471179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:38:25.066493  471179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:38:25.152140  471179 cri.go:89] found id: "e3f12c8476c7f675ccdaedbeec3d39577c01787f3e043afaf02faad8eef8a730"
	I1009 19:38:25.152163  471179 cri.go:89] found id: "10744f141a4b0dfd34d28c3a32335c0845b684257b88d74b758a7fe58035975e"
	I1009 19:38:25.152169  471179 cri.go:89] found id: "a229c68dff09136aa0c245ad345e4da9e4d2661e0f36d75399c3187ca7dad029"
	I1009 19:38:25.152173  471179 cri.go:89] found id: "bac73e47100848955f3f3f4f9b77a47feeb98dc2c3bf4b8a567178090f45a220"
	I1009 19:38:25.152176  471179 cri.go:89] found id: "ab150203b138e1f09f9d40149a76bfda555618d4861f5b1ecee8f410e751492a"
	I1009 19:38:25.152185  471179 cri.go:89] found id: "269bca5e10b8713a319b82f53713720fe83bba918a22af6746851588062867f7"
	I1009 19:38:25.152189  471179 cri.go:89] found id: "f7644ea5932c5e3b7208518f0c96c2d54106c86cb9d11050d55518f2ed9bac0d"
	I1009 19:38:25.152193  471179 cri.go:89] found id: "e3161747cdf0118272a93fab2eee3081718d8e3f73492036c9219649d6fbe93f"
	I1009 19:38:25.152196  471179 cri.go:89] found id: "5225c215f7ddf7ce082f0615c771a305c50067a07bcd3df6f45b03ea91f2b24c"
	I1009 19:38:25.152202  471179 cri.go:89] found id: "6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be"
	I1009 19:38:25.152205  471179 cri.go:89] found id: "2d63418e6ce2f77eca69e74ff9ea7e78acc2dc61f5289982a96c1ac9c78d7392"
	I1009 19:38:25.152208  471179 cri.go:89] found id: ""
	I1009 19:38:25.152262  471179 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:38:25.169649  471179 retry.go:31] will retry after 201.654758ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:38:25Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:38:25.372132  471179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:25.385471  471179 pause.go:52] kubelet running: false
	I1009 19:38:25.385539  471179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:38:25.567469  471179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:38:25.567545  471179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:38:25.666587  471179 cri.go:89] found id: "e3f12c8476c7f675ccdaedbeec3d39577c01787f3e043afaf02faad8eef8a730"
	I1009 19:38:25.666607  471179 cri.go:89] found id: "10744f141a4b0dfd34d28c3a32335c0845b684257b88d74b758a7fe58035975e"
	I1009 19:38:25.666612  471179 cri.go:89] found id: "a229c68dff09136aa0c245ad345e4da9e4d2661e0f36d75399c3187ca7dad029"
	I1009 19:38:25.666616  471179 cri.go:89] found id: "bac73e47100848955f3f3f4f9b77a47feeb98dc2c3bf4b8a567178090f45a220"
	I1009 19:38:25.666619  471179 cri.go:89] found id: "ab150203b138e1f09f9d40149a76bfda555618d4861f5b1ecee8f410e751492a"
	I1009 19:38:25.666623  471179 cri.go:89] found id: "269bca5e10b8713a319b82f53713720fe83bba918a22af6746851588062867f7"
	I1009 19:38:25.666627  471179 cri.go:89] found id: "f7644ea5932c5e3b7208518f0c96c2d54106c86cb9d11050d55518f2ed9bac0d"
	I1009 19:38:25.666630  471179 cri.go:89] found id: "e3161747cdf0118272a93fab2eee3081718d8e3f73492036c9219649d6fbe93f"
	I1009 19:38:25.666633  471179 cri.go:89] found id: "5225c215f7ddf7ce082f0615c771a305c50067a07bcd3df6f45b03ea91f2b24c"
	I1009 19:38:25.666639  471179 cri.go:89] found id: "6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be"
	I1009 19:38:25.666642  471179 cri.go:89] found id: "2d63418e6ce2f77eca69e74ff9ea7e78acc2dc61f5289982a96c1ac9c78d7392"
	I1009 19:38:25.666646  471179 cri.go:89] found id: ""
	I1009 19:38:25.666691  471179 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:38:25.684052  471179 retry.go:31] will retry after 353.435163ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:38:25Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:38:26.038726  471179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:26.052642  471179 pause.go:52] kubelet running: false
	I1009 19:38:26.052729  471179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:38:26.267715  471179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:38:26.267814  471179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:38:26.347141  471179 cri.go:89] found id: "e3f12c8476c7f675ccdaedbeec3d39577c01787f3e043afaf02faad8eef8a730"
	I1009 19:38:26.347164  471179 cri.go:89] found id: "10744f141a4b0dfd34d28c3a32335c0845b684257b88d74b758a7fe58035975e"
	I1009 19:38:26.347170  471179 cri.go:89] found id: "a229c68dff09136aa0c245ad345e4da9e4d2661e0f36d75399c3187ca7dad029"
	I1009 19:38:26.347174  471179 cri.go:89] found id: "bac73e47100848955f3f3f4f9b77a47feeb98dc2c3bf4b8a567178090f45a220"
	I1009 19:38:26.347177  471179 cri.go:89] found id: "ab150203b138e1f09f9d40149a76bfda555618d4861f5b1ecee8f410e751492a"
	I1009 19:38:26.347184  471179 cri.go:89] found id: "269bca5e10b8713a319b82f53713720fe83bba918a22af6746851588062867f7"
	I1009 19:38:26.347188  471179 cri.go:89] found id: "f7644ea5932c5e3b7208518f0c96c2d54106c86cb9d11050d55518f2ed9bac0d"
	I1009 19:38:26.347191  471179 cri.go:89] found id: "e3161747cdf0118272a93fab2eee3081718d8e3f73492036c9219649d6fbe93f"
	I1009 19:38:26.347214  471179 cri.go:89] found id: "5225c215f7ddf7ce082f0615c771a305c50067a07bcd3df6f45b03ea91f2b24c"
	I1009 19:38:26.347228  471179 cri.go:89] found id: "6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be"
	I1009 19:38:26.347232  471179 cri.go:89] found id: "2d63418e6ce2f77eca69e74ff9ea7e78acc2dc61f5289982a96c1ac9c78d7392"
	I1009 19:38:26.347235  471179 cri.go:89] found id: ""
	I1009 19:38:26.347292  471179 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:38:26.361677  471179 out.go:203] 
	W1009 19:38:26.364689  471179 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:38:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:38:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:38:26.364713  471179 out.go:285] * 
	* 
	W1009 19:38:26.371969  471179 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:38:26.374948  471179 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-271815 --alsologtostderr -v=1 failed: exit status 80
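(Editor's note: a minimal sketch of the same container-runtime checks the pause path logs above, runnable against this profile while it is still up. The profile name and the crictl label filter are copied from the log; the final `ls` is only an assumption about how one might confirm the missing runc state directory, not something the test itself runs.)

	# re-run the checks that preceded the GUEST_PAUSE failure (commands taken from the log above)
	out/minikube-linux-arm64 -p old-k8s-version-271815 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 -p old-k8s-version-271815 ssh "sudo runc list -f json"   # fails here: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p old-k8s-version-271815 ssh "ls -ld /run/runc"         # assumption: check whether the runc state dir exists at all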
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-271815
helpers_test.go:243: (dbg) docker inspect old-k8s-version-271815:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980",
	        "Created": "2025-10-09T19:35:50.362074272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 467432,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:37:14.685804818Z",
	            "FinishedAt": "2025-10-09T19:37:13.667247119Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/hostname",
	        "HostsPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/hosts",
	        "LogPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980-json.log",
	        "Name": "/old-k8s-version-271815",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-271815:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-271815",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980",
	                "LowerDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-271815",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-271815/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-271815",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-271815",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-271815",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c654eb20238f6304afc2c1634891d448b2c00dd384949552091221bcf1a44cc3",
	            "SandboxKey": "/var/run/docker/netns/c654eb20238f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-271815": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:fc:97:5d:a5:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9b70e298602c2790a8fd04817e351fbf4f06c3fbce53648b556f8d8aa63fa4cc",
	                    "EndpointID": "231032790e32f835603f3b747b9a1e82fec228c6e57bccfed131ab97aba8ba39",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-271815",
	                        "395bb50f3c39"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
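(Editor's note: a small sketch of reading back the port mappings shown in NetworkSettings.Ports above; the format string is the same one the provisioner uses later in this log, and the commented values are the ones from this run. Note also that HostConfig.Tmpfs mounts /run as tmpfs inside this kic container, which may be relevant to the missing /run/runc state directory reported by the pause failure, though the report itself does not establish that link.)

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-271815    # 33430 (SSH)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-271815  # 33433 (apiserver)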
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-271815 -n old-k8s-version-271815
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-271815 -n old-k8s-version-271815: exit status 2 (365.276432ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-271815 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-271815 logs -n 25: (1.328010119s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-224541 sudo containerd config dump                                                                                                                                                                                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo crio config                                                                                                                                                                                                             │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ delete  │ -p cilium-224541                                                                                                                                                                                                                              │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ start   │ -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ force-systemd-flag-476949 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-flag-476949                                                                                                                                                                                                                  │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-028248                                                                                                                                                                                                                   │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ cert-options-983220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ -p cert-options-983220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ delete  │ -p cert-options-983220                                                                                                                                                                                                                        │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │ 09 Oct 25 19:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-271815 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ delete  │ -p cert-expiration-259172                                                                                                                                                                                                                     │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:37:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:37:14.325897  467248 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:14.326103  467248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:14.326116  467248 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:14.326122  467248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:14.326445  467248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:37:14.326897  467248 out.go:368] Setting JSON to false
	I1009 19:37:14.328040  467248 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8386,"bootTime":1760030249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:37:14.328114  467248 start.go:141] virtualization:  
	I1009 19:37:14.330967  467248 out.go:179] * [old-k8s-version-271815] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:37:14.334806  467248 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:37:14.334939  467248 notify.go:220] Checking for updates...
	I1009 19:37:14.340759  467248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:37:14.343750  467248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:37:14.346610  467248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:37:14.349565  467248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:37:14.352656  467248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:37:14.356169  467248 config.go:182] Loaded profile config "old-k8s-version-271815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 19:37:14.359736  467248 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1009 19:37:14.362718  467248 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:37:14.388788  467248 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:37:14.388896  467248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:14.455661  467248 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-09 19:37:14.445631786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:37:14.455769  467248 docker.go:318] overlay module found
	I1009 19:37:14.459065  467248 out.go:179] * Using the docker driver based on existing profile
	I1009 19:37:14.462099  467248 start.go:305] selected driver: docker
	I1009 19:37:14.462118  467248 start.go:925] validating driver "docker" against &{Name:old-k8s-version-271815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:37:14.462266  467248 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:37:14.462978  467248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:14.556008  467248 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-09 19:37:14.540980075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:37:14.556370  467248 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:37:14.556391  467248 cni.go:84] Creating CNI manager for ""
	I1009 19:37:14.556470  467248 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:37:14.556521  467248 start.go:349] cluster config:
	{Name:old-k8s-version-271815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:37:14.559745  467248 out.go:179] * Starting "old-k8s-version-271815" primary control-plane node in "old-k8s-version-271815" cluster
	I1009 19:37:14.562625  467248 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:37:14.565679  467248 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:37:14.568547  467248 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 19:37:14.568610  467248 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1009 19:37:14.568624  467248 cache.go:64] Caching tarball of preloaded images
	I1009 19:37:14.568733  467248 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:37:14.568751  467248 cache.go:67] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1009 19:37:14.568884  467248 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/config.json ...
	I1009 19:37:14.569168  467248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:37:14.601996  467248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:37:14.602022  467248 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:37:14.602036  467248 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:37:14.602058  467248 start.go:360] acquireMachinesLock for old-k8s-version-271815: {Name:mk2253e3ad61415788b159368a95085c5f2eeced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:37:14.602190  467248 start.go:364] duration metric: took 93.466µs to acquireMachinesLock for "old-k8s-version-271815"
	I1009 19:37:14.602229  467248 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:37:14.602242  467248 fix.go:54] fixHost starting: 
	I1009 19:37:14.602567  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:14.634425  467248 fix.go:112] recreateIfNeeded on old-k8s-version-271815: state=Stopped err=<nil>
	W1009 19:37:14.634455  467248 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:37:14.190278  465613 cli_runner.go:164] Run: docker network inspect no-preload-678119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:37:14.208218  465613 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:37:14.213450  465613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:37:14.224915  465613 kubeadm.go:883] updating cluster {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:37:14.225020  465613 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:37:14.225065  465613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:37:14.259991  465613 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1009 19:37:14.260021  465613 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 19:37:14.260062  465613 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:14.260271  465613 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.260378  465613 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.260493  465613 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.260593  465613 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.260694  465613 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1009 19:37:14.260784  465613 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.260879  465613 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.263119  465613 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.263132  465613 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.263205  465613 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:14.263253  465613 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.263402  465613 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.263407  465613 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.263462  465613 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1009 19:37:14.263533  465613 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.504806  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.519360  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.530971  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1009 19:37:14.532534  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.534432  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.535095  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.544329  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.636645  465613 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1009 19:37:14.636696  465613 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.636769  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.709908  465613 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1009 19:37:14.709960  465613 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.710015  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.778407  465613 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1009 19:37:14.778453  465613 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.778519  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.778612  465613 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1009 19:37:14.778643  465613 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1009 19:37:14.778675  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.809099  465613 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1009 19:37:14.809147  465613 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.809198  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.809270  465613 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1009 19:37:14.809292  465613 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.809319  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.809398  465613 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1009 19:37:14.809420  465613 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.809447  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.809522  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.809614  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.809692  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.809805  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1009 19:37:14.907893  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.908054  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.908102  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.908205  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.908257  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.908302  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.908329  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1009 19:37:14.637749  467248 out.go:252] * Restarting existing docker container for "old-k8s-version-271815" ...
	I1009 19:37:14.637841  467248 cli_runner.go:164] Run: docker start old-k8s-version-271815
	I1009 19:37:15.051640  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:15.094311  467248 kic.go:430] container "old-k8s-version-271815" state is running.
	I1009 19:37:15.094704  467248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-271815
	I1009 19:37:15.122989  467248 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/config.json ...
	I1009 19:37:15.123238  467248 machine.go:93] provisionDockerMachine start ...
	I1009 19:37:15.123310  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:15.157785  467248 main.go:141] libmachine: Using SSH client type: native
	I1009 19:37:15.158110  467248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1009 19:37:15.158119  467248 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:37:15.159130  467248 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41414->127.0.0.1:33430: read: connection reset by peer
	I1009 19:37:18.326015  467248 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-271815
	
	I1009 19:37:18.326047  467248 ubuntu.go:182] provisioning hostname "old-k8s-version-271815"
	I1009 19:37:18.326108  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:18.350364  467248 main.go:141] libmachine: Using SSH client type: native
	I1009 19:37:18.350673  467248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1009 19:37:18.350691  467248 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-271815 && echo "old-k8s-version-271815" | sudo tee /etc/hostname
	I1009 19:37:18.517150  467248 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-271815
	
	I1009 19:37:18.517309  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:18.550960  467248 main.go:141] libmachine: Using SSH client type: native
	I1009 19:37:18.551338  467248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1009 19:37:18.551357  467248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-271815' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-271815/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-271815' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:37:18.706474  467248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:37:18.706551  467248 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:37:18.706588  467248 ubuntu.go:190] setting up certificates
	I1009 19:37:18.706622  467248 provision.go:84] configureAuth start
	I1009 19:37:18.706709  467248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-271815
	I1009 19:37:18.728701  467248 provision.go:143] copyHostCerts
	I1009 19:37:18.728766  467248 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:37:18.728783  467248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:37:18.728862  467248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:37:18.728962  467248 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:37:18.728968  467248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:37:18.728994  467248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:37:18.729045  467248 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:37:18.729049  467248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:37:18.729071  467248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:37:18.729115  467248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-271815 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-271815]
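The server certificate above is generated in Go by minikube's provisioner, signed by the machine CA with the SAN list [127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-271815]. A rough openssl equivalent, shown only as an illustration (the file names here are hypothetical, not taken from the log):

	# illustrative only: minikube does this in Go, not via openssl
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-271815" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-271815')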
	I1009 19:37:19.031582  467248 provision.go:177] copyRemoteCerts
	I1009 19:37:19.031704  467248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:37:19.031762  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.049070  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:19.156033  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:37:19.180905  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 19:37:19.208610  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:37:19.236320  467248 provision.go:87] duration metric: took 529.656078ms to configureAuth
	I1009 19:37:19.236389  467248 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:37:19.236605  467248 config.go:182] Loaded profile config "old-k8s-version-271815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 19:37:19.236751  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.267911  467248 main.go:141] libmachine: Using SSH client type: native
	I1009 19:37:19.268214  467248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1009 19:37:19.268228  467248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:37:15.148039  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:15.148114  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:15.148163  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:15.148209  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:15.148264  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:15.148304  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:15.148351  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1009 19:37:15.344043  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1009 19:37:15.344146  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1009 19:37:15.344210  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1009 19:37:15.344260  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1009 19:37:15.344321  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:15.344376  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:15.344431  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:15.344494  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1009 19:37:15.344547  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1009 19:37:15.344599  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1009 19:37:15.344647  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1009 19:37:15.434026  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1009 19:37:15.434311  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1009 19:37:15.434114  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1009 19:37:15.434435  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1009 19:37:15.434149  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1009 19:37:15.434546  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1009 19:37:15.434218  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1009 19:37:15.434704  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1009 19:37:15.434251  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1009 19:37:15.434834  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1009 19:37:15.434271  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1009 19:37:15.434923  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1009 19:37:15.434181  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1009 19:37:15.435045  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	W1009 19:37:15.520388  465613 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1009 19:37:15.520572  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
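The arch-mismatch warning above means the cached storage-provisioner image resolved to amd64 even though this runner is arm64, so minikube marks it for a fix-up. A hypothetical manual check of what podman actually holds on the node:

	sudo podman image inspect --format '{{.Architecture}}' gcr.io/k8s-minikube/storage-provisioner:v5
	# "arm64" is expected on this host; "amd64" reproduces the mismatch warned about above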
	I1009 19:37:15.527020  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1009 19:37:15.527062  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1009 19:37:15.527140  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1009 19:37:15.527206  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1009 19:37:15.527220  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1009 19:37:15.527309  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1009 19:37:15.561202  465613 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1009 19:37:15.561333  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
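The block above repeats one caching pattern per image: stat the tarball under /var/lib/minikube/images, scp it over from the local cache only when the stat fails, then feed it to podman load. Condensed into a shell sketch for a single image ("node" stands in for the SSH target on 127.0.0.1:33425; the full cache path appears in the log above; permissions are glossed over):

	IMG=pause_3.10.1
	SRC=.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1   # abridged local cache path
	DST=/var/lib/minikube/images/$IMG
	ssh node "stat -c '%s %y' $DST" || scp "$SRC" node:"$DST"
	ssh node "sudo podman load -i $DST"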
	W1009 19:37:15.581082  465613 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1009 19:37:15.581166  465613 retry.go:31] will retry after 134.152426ms: ssh: rejected: connect failed (open failed)
	I1009 19:37:15.716209  465613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:37:15.744824  465613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:37:15.803237  465613 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1009 19:37:15.803327  465613 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:15.803400  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:15.803483  465613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:37:15.847104  465613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:37:16.133326  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:16.133504  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1009 19:37:16.188192  465613 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1009 19:37:16.188326  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1009 19:37:16.268409  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:18.210811  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.02243924s)
	I1009 19:37:18.210837  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1009 19:37:18.210857  465613 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1009 19:37:18.210855  465613 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.942414218s)
	I1009 19:37:18.210904  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1009 19:37:18.210917  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:19.677765  467248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:37:19.677791  467248 machine.go:96] duration metric: took 4.554542396s to provisionDockerMachine
	I1009 19:37:19.677803  467248 start.go:293] postStartSetup for "old-k8s-version-271815" (driver="docker")
	I1009 19:37:19.677830  467248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:37:19.677909  467248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:37:19.677956  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.703348  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:19.811094  467248 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:37:19.814920  467248 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:37:19.814992  467248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:37:19.815018  467248 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:37:19.815097  467248 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:37:19.815217  467248 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:37:19.815355  467248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:37:19.823229  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:37:19.845761  467248 start.go:296] duration metric: took 167.942931ms for postStartSetup
	I1009 19:37:19.845896  467248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:19.845978  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.863767  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:19.967489  467248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:37:19.975594  467248 fix.go:56] duration metric: took 5.373344999s for fixHost
	I1009 19:37:19.975668  467248 start.go:83] releasing machines lock for "old-k8s-version-271815", held for 5.373458156s
	I1009 19:37:19.975762  467248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-271815
	I1009 19:37:19.993655  467248 ssh_runner.go:195] Run: cat /version.json
	I1009 19:37:19.993694  467248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:37:19.993715  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.993748  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:20.031004  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:20.037687  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:20.154873  467248 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:20.246813  467248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:37:20.300731  467248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:37:20.305744  467248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:37:20.305846  467248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:37:20.314753  467248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:37:20.314779  467248 start.go:495] detecting cgroup driver to use...
	I1009 19:37:20.314842  467248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:37:20.314910  467248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:37:20.331003  467248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:37:20.346312  467248 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:37:20.346405  467248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:37:20.363374  467248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:37:20.377809  467248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:37:20.510355  467248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:37:20.644006  467248 docker.go:234] disabling docker service ...
	I1009 19:37:20.644130  467248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:37:20.659336  467248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:37:20.674619  467248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:37:20.829171  467248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:37:20.981224  467248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:37:20.996513  467248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:37:21.014065  467248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 19:37:21.014219  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.024035  467248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:37:21.024169  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.033896  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.043737  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.053477  467248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:37:21.062420  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.080946  467248 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.091594  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.107967  467248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:37:21.120465  467248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:37:21.131337  467248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:37:21.335871  467248 ssh_runner.go:195] Run: sudo systemctl restart crio
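The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, set conmon_cgroup to "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before the daemon-reload and restart. A spot-check of the resulting drop-in (expected values taken from the commands above, shown as comments) could be:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",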
	I1009 19:37:21.890428  467248 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:37:21.890545  467248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:37:21.896041  467248 start.go:563] Will wait 60s for crictl version
	I1009 19:37:21.896218  467248 ssh_runner.go:195] Run: which crictl
	I1009 19:37:21.900948  467248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:37:21.931711  467248 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:37:21.931848  467248 ssh_runner.go:195] Run: crio --version
	I1009 19:37:21.967583  467248 ssh_runner.go:195] Run: crio --version
	I1009 19:37:22.005968  467248 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1009 19:37:22.009290  467248 cli_runner.go:164] Run: docker network inspect old-k8s-version-271815 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:37:22.031895  467248 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:37:22.036217  467248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:37:22.047018  467248 kubeadm.go:883] updating cluster {Name:old-k8s-version-271815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:37:22.047139  467248 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 19:37:22.047191  467248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:37:22.093550  467248 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:37:22.093570  467248 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:37:22.093655  467248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:37:22.123246  467248 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:37:22.123266  467248 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:37:22.123273  467248 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1009 19:37:22.123375  467248 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-271815 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:37:22.123452  467248 ssh_runner.go:195] Run: crio config
	I1009 19:37:22.202258  467248 cni.go:84] Creating CNI manager for ""
	I1009 19:37:22.202328  467248 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:37:22.202362  467248 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:37:22.202411  467248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-271815 NodeName:old-k8s-version-271815 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:37:22.202654  467248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-271815"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:37:22.202753  467248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1009 19:37:22.214785  467248 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:37:22.214910  467248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:37:22.227417  467248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1009 19:37:22.261590  467248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:37:22.284664  467248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
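The kubeadm configuration printed earlier is staged here as /var/tmp/minikube/kubeadm.yaml.new; a few lines further down it is diffed against the existing kubeadm.yaml to decide whether the control plane needs reconfiguring. A hypothetical way to sanity-check such a file by hand, without touching the running cluster, is a kubeadm dry run:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run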
	I1009 19:37:22.308919  467248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:37:22.313040  467248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:37:22.323990  467248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:37:22.499417  467248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:37:22.515894  467248 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815 for IP: 192.168.85.2
	I1009 19:37:22.515964  467248 certs.go:195] generating shared ca certs ...
	I1009 19:37:22.516017  467248 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:22.516194  467248 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:37:22.516271  467248 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:37:22.516293  467248 certs.go:257] generating profile certs ...
	I1009 19:37:22.516426  467248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.key
	I1009 19:37:22.516540  467248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/apiserver.key.008660bc
	I1009 19:37:22.516629  467248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/proxy-client.key
	I1009 19:37:22.516772  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:37:22.516848  467248 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:37:22.516874  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:37:22.516939  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:37:22.516992  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:37:22.517044  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:37:22.517122  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:37:22.517893  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:37:22.545005  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:37:22.577660  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:37:22.628816  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:37:22.692539  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 19:37:22.742760  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:37:22.803320  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:37:22.824909  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:37:22.844559  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:37:22.864083  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:37:22.883037  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:37:22.903175  467248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:37:22.917466  467248 ssh_runner.go:195] Run: openssl version
	I1009 19:37:22.923985  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:37:22.933031  467248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:37:22.937819  467248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:37:22.937950  467248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:37:22.979535  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:37:22.988705  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:37:22.997937  467248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:37:23.002756  467248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:37:23.002904  467248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:37:23.050997  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:37:23.059681  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:37:23.069113  467248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:23.073488  467248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:23.073632  467248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:23.119129  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:37:23.127870  467248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:37:23.132422  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:37:23.174423  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:37:23.235524  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:37:23.375598  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:37:23.466701  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:37:23.557842  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
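Two openssl idioms drive this block: the subject-hash symlink, which lets OpenSSL's CApath lookup find a CA at /etc/ssl/certs/<hash>.0, and -checkend, which exits non-zero if a certificate expires within the given number of seconds (86400 = 24h). By hand, using the minikubeCA hash seen above:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$H.0
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for 24h+"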
	I1009 19:37:23.698162  467248 kubeadm.go:400] StartCluster: {Name:old-k8s-version-271815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:37:23.698307  467248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:37:23.698410  467248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:37:23.790992  467248 cri.go:89] found id: "269bca5e10b8713a319b82f53713720fe83bba918a22af6746851588062867f7"
	I1009 19:37:23.791062  467248 cri.go:89] found id: "f7644ea5932c5e3b7208518f0c96c2d54106c86cb9d11050d55518f2ed9bac0d"
	I1009 19:37:23.791083  467248 cri.go:89] found id: "e3161747cdf0118272a93fab2eee3081718d8e3f73492036c9219649d6fbe93f"
	I1009 19:37:23.791117  467248 cri.go:89] found id: "5225c215f7ddf7ce082f0615c771a305c50067a07bcd3df6f45b03ea91f2b24c"
	I1009 19:37:23.791153  467248 cri.go:89] found id: ""
	I1009 19:37:23.791238  467248 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:37:23.809089  467248 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:37:23Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:37:23.809244  467248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:37:23.819707  467248 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:37:23.819765  467248 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:37:23.819844  467248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:37:23.838601  467248 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:23.839122  467248 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-271815" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:37:23.839288  467248 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-271815" cluster setting kubeconfig missing "old-k8s-version-271815" context setting]
	I1009 19:37:23.839663  467248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:23.841324  467248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:37:23.872049  467248 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 19:37:23.872129  467248 kubeadm.go:601] duration metric: took 52.33549ms to restartPrimaryControlPlane
	I1009 19:37:23.872154  467248 kubeadm.go:402] duration metric: took 174.001979ms to StartCluster
	I1009 19:37:23.872193  467248 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:23.872276  467248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:37:23.873008  467248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:23.873284  467248 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:37:23.873674  467248 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:37:23.873758  467248 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-271815"
	I1009 19:37:23.873770  467248 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-271815"
	W1009 19:37:23.873777  467248 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:37:23.873797  467248 host.go:66] Checking if "old-k8s-version-271815" exists ...
	I1009 19:37:23.874294  467248 config.go:182] Loaded profile config "old-k8s-version-271815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 19:37:23.874356  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:23.874376  467248 addons.go:69] Setting dashboard=true in profile "old-k8s-version-271815"
	I1009 19:37:23.874387  467248 addons.go:238] Setting addon dashboard=true in "old-k8s-version-271815"
	W1009 19:37:23.874393  467248 addons.go:247] addon dashboard should already be in state true
	I1009 19:37:23.874412  467248 host.go:66] Checking if "old-k8s-version-271815" exists ...
	I1009 19:37:23.874883  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:23.875125  467248 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-271815"
	I1009 19:37:23.875148  467248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-271815"
	I1009 19:37:23.875399  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:23.890500  467248 out.go:179] * Verifying Kubernetes components...
	I1009 19:37:23.898638  467248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:37:23.919819  467248 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-271815"
	W1009 19:37:23.919842  467248 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:37:23.919865  467248 host.go:66] Checking if "old-k8s-version-271815" exists ...
	I1009 19:37:23.920279  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:23.937306  467248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:23.940213  467248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:37:23.940258  467248 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:37:23.940282  467248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:37:23.940351  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:23.949169  467248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:37:23.952297  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:37:23.952327  467248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:37:23.952400  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:23.990006  467248 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:37:23.990027  467248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:37:23.990092  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:24.010574  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:24.030082  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:24.039394  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:24.273596  467248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:37:20.458116  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.247191413s)
	I1009 19:37:20.458159  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1009 19:37:20.458167  465613 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.247230798s)
	I1009 19:37:20.458178  465613 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1009 19:37:20.458204  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 19:37:20.458226  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1009 19:37:20.458279  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 19:37:22.174925  465613 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.716622932s)
	I1009 19:37:22.174957  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1009 19:37:22.174983  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1009 19:37:22.175057  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.71682094s)
	I1009 19:37:22.175071  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1009 19:37:22.175091  465613 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1009 19:37:22.175136  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1009 19:37:24.120444  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.945271589s)
	I1009 19:37:24.120472  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1009 19:37:24.120490  465613 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1009 19:37:24.120539  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1009 19:37:24.328918  467248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:37:24.385773  467248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:37:24.394501  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:37:24.394526  467248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:37:24.583927  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:37:24.584002  467248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:37:24.739374  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:37:24.739445  467248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:37:24.788870  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:37:24.788929  467248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:37:24.844579  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:37:24.844652  467248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:37:24.875399  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:37:24.875475  467248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:37:24.917242  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:37:24.917318  467248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:37:24.956335  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:37:24.956412  467248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:37:24.993068  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:37:24.993140  467248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:37:25.032742  467248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
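
The single apply above pushes all of the dashboard manifests copied in the preceding steps. As a hedged sketch (standalone, not the addons code itself; the binary and manifest paths are taken from the log), the same batch apply with a pinned kubectl and an explicit kubeconfig could look like:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Paths mirror the log; run as a user that can read the kubeconfig.
    	kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
    	manifests := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
    		"/etc/kubernetes/addons/dashboard-dp.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    		// remaining dashboard-*.yaml files from the log omitted for brevity
    	}
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }
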
	I1009 19:37:26.221503  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.100937939s)
	I1009 19:37:26.221532  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1009 19:37:26.221550  465613 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1009 19:37:26.221599  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1009 19:37:33.605534  467248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.331894193s)
	I1009 19:37:33.605579  467248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.276586847s)
	I1009 19:37:33.605589  467248 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.219742425s)
	I1009 19:37:33.605617  467248 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-271815" to be "Ready" ...
	I1009 19:37:33.675128  467248 node_ready.go:49] node "old-k8s-version-271815" is "Ready"
	I1009 19:37:33.675206  467248 node_ready.go:38] duration metric: took 69.560905ms for node "old-k8s-version-271815" to be "Ready" ...
	I1009 19:37:33.675233  467248 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:37:33.675317  467248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:37:34.778899  467248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.746066492s)
	I1009 19:37:34.779094  467248 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.10373927s)
	I1009 19:37:34.779113  467248 api_server.go:72] duration metric: took 10.905777758s to wait for apiserver process to appear ...
	I1009 19:37:34.779119  467248 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:37:34.779146  467248 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
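
The healthz wait above is just an HTTPS poll against the apiserver. A small self-contained sketch of the same probe (the address comes from the log; skipping TLS verification is only acceptable for a throwaway health check like this):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.85.2:8443/healthz" // address from the log above
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz returned 200: ok")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver never became healthy")
    }
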
	I1009 19:37:34.780697  467248 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-271815 addons enable metrics-server
	
	I1009 19:37:34.781832  467248 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 19:37:31.295176  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.073551616s)
	I1009 19:37:31.295200  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1009 19:37:31.295216  465613 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 19:37:31.295264  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1009 19:37:32.247548  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 19:37:32.247580  465613 cache_images.go:124] Successfully loaded all cached images
	I1009 19:37:32.247586  465613 cache_images.go:93] duration metric: took 17.987553221s to LoadCachedImages
	I1009 19:37:32.247598  465613 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 19:37:32.247683  465613 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-678119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:37:32.247756  465613 ssh_runner.go:195] Run: crio config
	I1009 19:37:32.336695  465613 cni.go:84] Creating CNI manager for ""
	I1009 19:37:32.336773  465613 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:37:32.336815  465613 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:37:32.336865  465613 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-678119 NodeName:no-preload-678119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:37:32.337024  465613 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-678119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
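
The assembled configuration above can be exercised before the real init. As a hedged sketch (assuming the v1.34 kubeadm binary is on PATH and the config has already been written to the path this run uses), kubeadm's --dry-run mode renders and validates the same documents without starting any components:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// --dry-run parses the InitConfiguration/ClusterConfiguration/Kubelet
    	// documents and renders manifests without touching the node.
    	cmd := exec.Command("sudo", "kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml", "--dry-run")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }
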
	
	I1009 19:37:32.337130  465613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:37:32.345919  465613 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1009 19:37:32.346037  465613 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1009 19:37:32.356268  465613 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1009 19:37:32.356454  465613 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1009 19:37:32.356856  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1009 19:37:32.356591  465613 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1009 19:37:32.362025  465613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1009 19:37:32.362060  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1009 19:37:33.537315  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1009 19:37:33.563720  465613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:33.572252  465613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1009 19:37:33.572285  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1009 19:37:33.671513  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1009 19:37:33.694805  465613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1009 19:37:33.694891  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
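
The download.go lines above fetch the release binaries with a checksum reference and then transfer them onto the node. A standalone sketch of the same idea, assuming (as dl.k8s.io does for release binaries) that the published .sha256 URL returns just the hex digest:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads url into path and returns the hex SHA-256 of the bytes written.
    func fetch(url, path string) (string, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	f, err := os.Create(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
    	got, err := fetch(base, "kubelet")
    	if err != nil {
    		log.Fatal(err)
    	}
    	resp, err := http.Get(base + ".sha256")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	want, _ := io.ReadAll(resp.Body)
    	if got != strings.TrimSpace(string(want)) {
    		log.Fatalf("checksum mismatch: got %s want %s", got, want)
    	}
    	fmt.Println("kubelet checksum verified:", got)
    }
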
	I1009 19:37:34.544351  465613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:37:34.560187  465613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:37:34.575088  465613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:37:34.609240  465613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1009 19:37:34.622837  465613 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:37:34.631904  465613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:37:34.644100  465613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:37:34.830738  465613 ssh_runner.go:195] Run: sudo systemctl start kubelet
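
The three `scp memory` writes followed by daemon-reload and start above install the kubelet unit override. A minimal sketch of the same sequence (root required; the ExecStart line is copied from the kubelet flags shown earlier in this log, so treat it as an example rather than a recommendation):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    const dropIn = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-678119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
    `

    func main() {
    	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
    		log.Fatal(err)
    	}
    	// systemd must re-read unit files before the new ExecStart takes effect.
    	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
    		}
    	}
    }
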
	I1009 19:37:34.848767  465613 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119 for IP: 192.168.76.2
	I1009 19:37:34.848832  465613 certs.go:195] generating shared ca certs ...
	I1009 19:37:34.848861  465613 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:34.849039  465613 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:37:34.849108  465613 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:37:34.849149  465613 certs.go:257] generating profile certs ...
	I1009 19:37:34.849239  465613 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key
	I1009 19:37:34.849271  465613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt with IP's: []
	I1009 19:37:35.295871  465613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt ...
	I1009 19:37:35.295897  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: {Name:mk71dd1c30258f0b4095df2035cb942a2d8d57c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:35.296096  465613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key ...
	I1009 19:37:35.296105  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key: {Name:mk721a7d11722f195a4be7c6b4dc0780379708f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:35.296183  465613 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7
	I1009 19:37:35.296199  465613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt.7093ead7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 19:37:36.056367  465613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt.7093ead7 ...
	I1009 19:37:36.056413  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt.7093ead7: {Name:mk88e7065aaa99e71eda962289cb921a85a5963a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:36.056605  465613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7 ...
	I1009 19:37:36.056621  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7: {Name:mk8221c4cea0ade3667b466c605564d8fef0e3da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:36.056707  465613 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt.7093ead7 -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt
	I1009 19:37:36.056784  465613 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7 -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key
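
The apiserver certificate generated above carries the service, localhost and node addresses as IP SANs. A short self-contained sketch of issuing a certificate with those SANs via crypto/x509 (self-signed here for brevity, whereas the real certificate is signed by minikubeCA, and the private key would also be persisted in practice):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// IP SANs taken from the log line above.
    	ips := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    		net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    	}
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	out, err := os.Create("apiserver.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer out.Close()
    	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }
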
	I1009 19:37:36.056845  465613 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key
	I1009 19:37:36.056869  465613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt with IP's: []
	I1009 19:37:36.484391  465613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt ...
	I1009 19:37:36.484460  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt: {Name:mka668a089b506fdf2b3e2713eefbbeb90139f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:36.484683  465613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key ...
	I1009 19:37:36.484699  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key: {Name:mk7efbe62dd124414300523e0c1dfb790f5ad6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:36.484888  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:37:36.484932  465613 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:37:36.484946  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:37:36.484969  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:37:36.484994  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:37:36.485026  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:37:36.485072  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:37:36.485627  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:37:36.507029  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:37:36.526403  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:37:36.544917  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:37:36.564012  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:37:36.582277  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:37:36.600372  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:37:36.619057  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:37:36.637529  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:37:36.655737  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:37:36.673785  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:37:36.696704  465613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:37:36.710703  465613 ssh_runner.go:195] Run: openssl version
	I1009 19:37:36.718964  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:37:36.728317  465613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:37:36.732926  465613 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:37:36.733041  465613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:37:36.775949  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:37:36.785337  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:37:36.794749  465613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:37:36.799418  465613 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:37:36.799527  465613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:37:36.841232  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:37:36.851049  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:37:36.859593  465613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:36.863901  465613 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:36.863970  465613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:36.909222  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
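
The `ln -fs` guards above create the OpenSSL-style <subject-hash>.0 links (51391683.0, 3ec20f2e.0, b5213941.0) that the system trust directory uses for lookups. A small sketch that recreates one such link, with the hash computed the same way as the `openssl x509 -hash -noout` calls in the log (paths taken from the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// `openssl x509 -hash` prints the subject-name hash that becomes the
    	// link name <hash>.0 in /etc/ssl/certs.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Equivalent to the `test -L ... || ln -fs ...` guard in the log.
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink(pemPath, link); err != nil {
    			log.Fatal(err)
    		}
    	}
    }
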
	I1009 19:37:36.917869  465613 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:37:36.921649  465613 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:37:36.921734  465613 kubeadm.go:400] StartCluster: {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:37:36.921818  465613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:37:36.921878  465613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:37:36.952578  465613 cri.go:89] found id: ""
	I1009 19:37:36.952659  465613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:37:36.962456  465613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:37:36.970886  465613 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:37:36.970999  465613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:37:36.978877  465613 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:37:36.978896  465613 kubeadm.go:157] found existing configuration files:
	
	I1009 19:37:36.978977  465613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:37:36.986614  465613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:37:36.986721  465613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:37:36.994152  465613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:37:37.008592  465613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:37:37.008675  465613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:37:37.017329  465613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:37:37.027373  465613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:37:37.027468  465613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:37:37.036193  465613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:37:37.044613  465613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:37:37.044703  465613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:37:37.052676  465613 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:37:37.096704  465613 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:37:37.096767  465613 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:37:37.127490  465613 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:37:37.127654  465613 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:37:37.127732  465613 kubeadm.go:318] OS: Linux
	I1009 19:37:37.127808  465613 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:37:37.127886  465613 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:37:37.127970  465613 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:37:37.128085  465613 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:37:37.128167  465613 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:37:37.128286  465613 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:37:37.128348  465613 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:37:37.128416  465613 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:37:37.128472  465613 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:37:37.204197  465613 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:37:37.204315  465613 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:37:37.204422  465613 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:37:37.230626  465613 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:37:34.783191  467248 addons.go:514] duration metric: took 10.909505053s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 19:37:34.790193  467248 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:37:34.791743  467248 api_server.go:141] control plane version: v1.28.0
	I1009 19:37:34.791806  467248 api_server.go:131] duration metric: took 12.680353ms to wait for apiserver health ...
	I1009 19:37:34.791829  467248 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:37:34.797777  467248 system_pods.go:59] 8 kube-system pods found
	I1009 19:37:34.797807  467248 system_pods.go:61] "coredns-5dd5756b68-ftv2x" [dc6318da-ce5f-4d30-9999-62b2f083b2da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:37:34.797818  467248 system_pods.go:61] "etcd-old-k8s-version-271815" [410e417a-3808-4b54-81f4-ca4e6dc04b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:37:34.797824  467248 system_pods.go:61] "kindnet-t5pvl" [6cb7e417-e089-4f17-b9d6-9eb1ad6d968e] Running
	I1009 19:37:34.797832  467248 system_pods.go:61] "kube-apiserver-old-k8s-version-271815" [d358823b-82e8-4e81-8ba0-4f01275fe48d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:37:34.797839  467248 system_pods.go:61] "kube-controller-manager-old-k8s-version-271815" [d2e23f8b-cffe-4f84-ab0d-5324794b460d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:37:34.797845  467248 system_pods.go:61] "kube-proxy-7j6jw" [f8087fcd-ccb5-438a-8b76-034287b3cd28] Running
	I1009 19:37:34.797853  467248 system_pods.go:61] "kube-scheduler-old-k8s-version-271815" [166c7cee-c35b-48a2-a799-9157538ac799] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:37:34.797857  467248 system_pods.go:61] "storage-provisioner" [f5406654-bb8c-49c3-a7a4-e3a13517e0e2] Running
	I1009 19:37:34.797862  467248 system_pods.go:74] duration metric: took 6.015204ms to wait for pod list to return data ...
	I1009 19:37:34.797869  467248 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:37:34.801526  467248 default_sa.go:45] found service account: "default"
	I1009 19:37:34.801600  467248 default_sa.go:55] duration metric: took 3.724349ms for default service account to be created ...
	I1009 19:37:34.801625  467248 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:37:34.805978  467248 system_pods.go:86] 8 kube-system pods found
	I1009 19:37:34.806059  467248 system_pods.go:89] "coredns-5dd5756b68-ftv2x" [dc6318da-ce5f-4d30-9999-62b2f083b2da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:37:34.806084  467248 system_pods.go:89] "etcd-old-k8s-version-271815" [410e417a-3808-4b54-81f4-ca4e6dc04b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:37:34.806107  467248 system_pods.go:89] "kindnet-t5pvl" [6cb7e417-e089-4f17-b9d6-9eb1ad6d968e] Running
	I1009 19:37:34.806189  467248 system_pods.go:89] "kube-apiserver-old-k8s-version-271815" [d358823b-82e8-4e81-8ba0-4f01275fe48d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:37:34.806218  467248 system_pods.go:89] "kube-controller-manager-old-k8s-version-271815" [d2e23f8b-cffe-4f84-ab0d-5324794b460d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:37:34.806236  467248 system_pods.go:89] "kube-proxy-7j6jw" [f8087fcd-ccb5-438a-8b76-034287b3cd28] Running
	I1009 19:37:34.806272  467248 system_pods.go:89] "kube-scheduler-old-k8s-version-271815" [166c7cee-c35b-48a2-a799-9157538ac799] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:37:34.806300  467248 system_pods.go:89] "storage-provisioner" [f5406654-bb8c-49c3-a7a4-e3a13517e0e2] Running
	I1009 19:37:34.806326  467248 system_pods.go:126] duration metric: took 4.67946ms to wait for k8s-apps to be running ...
	I1009 19:37:34.806359  467248 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:37:34.806447  467248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:34.828166  467248 system_svc.go:56] duration metric: took 21.798394ms WaitForService to wait for kubelet
	I1009 19:37:34.828245  467248 kubeadm.go:586] duration metric: took 10.954907653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:37:34.828298  467248 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:37:34.832608  467248 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:37:34.832682  467248 node_conditions.go:123] node cpu capacity is 2
	I1009 19:37:34.832721  467248 node_conditions.go:105] duration metric: took 4.398399ms to run NodePressure ...
	I1009 19:37:34.832752  467248 start.go:241] waiting for startup goroutines ...
	I1009 19:37:34.832775  467248 start.go:246] waiting for cluster config update ...
	I1009 19:37:34.832813  467248 start.go:255] writing updated cluster config ...
	I1009 19:37:34.833150  467248 ssh_runner.go:195] Run: rm -f paused
	I1009 19:37:34.838210  467248 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:37:34.843541  467248 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-ftv2x" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:37:36.850748  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
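
pod_ready.go is simply polling the pod's Ready condition until it flips to True or the pod goes away. An equivalent standalone loop (pod name taken from the log; kubectl is assumed to be on PATH and pointed at this cluster) would be:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	pod := "coredns-5dd5756b68-ftv2x" // from the log; substitute as needed
    	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", pod,
    			"-o", jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Printf("pod %q is Ready\n", pod)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Printf("pod %q never became Ready\n", pod)
    }
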
	I1009 19:37:37.232924  465613 out.go:252]   - Generating certificates and keys ...
	I1009 19:37:37.233025  465613 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:37:37.233113  465613 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:37:37.409623  465613 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:37:37.648129  465613 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:37:38.528993  465613 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:37:39.419929  465613 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1009 19:37:39.352696  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:41.850910  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:43.864583  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:40.183541  465613 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:37:40.183841  465613 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-678119] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:37:40.784626  465613 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:37:40.784942  465613 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-678119] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:37:41.182394  465613 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:37:41.553039  465613 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:37:42.008236  465613 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:37:42.009018  465613 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:37:42.742573  465613 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:37:43.559364  465613 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:37:44.569344  465613 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:37:44.918478  465613 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:37:46.001308  465613 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:37:46.001450  465613 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:37:46.004971  465613 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1009 19:37:46.350025  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:48.354369  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:46.006290  465613 out.go:252]   - Booting up control plane ...
	I1009 19:37:46.006423  465613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:37:46.006510  465613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:37:46.008330  465613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:37:46.034712  465613 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:37:46.035428  465613 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:37:46.054602  465613 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:37:46.054726  465613 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:37:46.054771  465613 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:37:46.225016  465613 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:37:46.225152  465613 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:37:47.725913  465613 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500901627s
	I1009 19:37:47.729609  465613 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:37:47.729709  465613 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 19:37:47.729802  465613 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:37:47.729888  465613 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1009 19:37:50.851734  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:52.852089  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:56.603242  465613 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 8.868746401s
	I1009 19:37:56.969029  465613 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.239430012s
	I1009 19:37:58.731892  465613 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.001977221s
	I1009 19:37:58.757918  465613 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:37:58.775113  465613 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:37:58.789916  465613 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:37:58.790158  465613 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-678119 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:37:58.804732  465613 kubeadm.go:318] [bootstrap-token] Using token: bja34r.6kzea7cmbq4vjgav
	W1009 19:37:54.852635  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:57.350188  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:58.808933  465613 out.go:252]   - Configuring RBAC rules ...
	I1009 19:37:58.809059  465613 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:37:58.813502  465613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:37:58.824340  465613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:37:58.829527  465613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:37:58.836667  465613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:37:58.840835  465613 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:37:59.140158  465613 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:37:59.584462  465613 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:38:00.161335  465613 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:38:00.161366  465613 kubeadm.go:318] 
	I1009 19:38:00.161430  465613 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:38:00.161436  465613 kubeadm.go:318] 
	I1009 19:38:00.161534  465613 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:38:00.161541  465613 kubeadm.go:318] 
	I1009 19:38:00.161568  465613 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:38:00.161630  465613 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:38:00.161685  465613 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:38:00.161690  465613 kubeadm.go:318] 
	I1009 19:38:00.161747  465613 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:38:00.161759  465613 kubeadm.go:318] 
	I1009 19:38:00.161820  465613 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:38:00.161826  465613 kubeadm.go:318] 
	I1009 19:38:00.161880  465613 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:38:00.161960  465613 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:38:00.162032  465613 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:38:00.162037  465613 kubeadm.go:318] 
	I1009 19:38:00.162144  465613 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:38:00.162228  465613 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:38:00.162233  465613 kubeadm.go:318] 
	I1009 19:38:00.162331  465613 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token bja34r.6kzea7cmbq4vjgav \
	I1009 19:38:00.162440  465613 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:38:00.163250  465613 kubeadm.go:318] 	--control-plane 
	I1009 19:38:00.163271  465613 kubeadm.go:318] 
	I1009 19:38:00.163364  465613 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:38:00.163383  465613 kubeadm.go:318] 
	I1009 19:38:00.163477  465613 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token bja34r.6kzea7cmbq4vjgav \
	I1009 19:38:00.163737  465613 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
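
The --discovery-token-ca-cert-hash printed in the join command above is, per the kubeadm documentation, the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it from the ca.crt path used in this run:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }
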
	I1009 19:38:00.203019  465613 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:38:00.205463  465613 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:38:00.205615  465613 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:38:00.205659  465613 cni.go:84] Creating CNI manager for ""
	I1009 19:38:00.205673  465613 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:38:00.214910  465613 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:38:00.225216  465613 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:38:00.267484  465613 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:38:00.267506  465613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:38:00.303312  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:38:00.738348  465613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:38:00.738498  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:00.738566  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-678119 minikube.k8s.io/updated_at=2025_10_09T19_38_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=no-preload-678119 minikube.k8s.io/primary=true
	I1009 19:38:00.901197  465613 ops.go:34] apiserver oom_adj: -16
	I1009 19:38:00.901278  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:01.401627  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:01.902397  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:02.402227  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:02.901534  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:03.401401  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:03.902243  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:03.998875  465613 kubeadm.go:1113] duration metric: took 3.260428988s to wait for elevateKubeSystemPrivileges
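
The repeated `kubectl get sa default` runs above are a poll for the default ServiceAccount to exist before the RBAC setup is treated as complete. A standalone version of the same wait (binary and kubeconfig paths taken from the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Succeeds only once the controller-manager has created the
    		// "default" ServiceAccount.
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default ServiceAccount is present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for the default ServiceAccount")
    }
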
	I1009 19:38:03.998908  465613 kubeadm.go:402] duration metric: took 27.077205286s to StartCluster
	I1009 19:38:03.998935  465613 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:03.999056  465613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:04.000579  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:04.001073  465613 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:04.001703  465613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:38:04.002910  465613 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:04.003064  465613 addons.go:69] Setting storage-provisioner=true in profile "no-preload-678119"
	I1009 19:38:04.003083  465613 addons.go:238] Setting addon storage-provisioner=true in "no-preload-678119"
	I1009 19:38:04.003140  465613 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:04.003367  465613 addons.go:69] Setting default-storageclass=true in profile "no-preload-678119"
	I1009 19:38:04.003394  465613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-678119"
	I1009 19:38:04.003704  465613 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:04.003869  465613 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:04.005133  465613 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:04.006390  465613 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:04.010094  465613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:04.059665  465613 addons.go:238] Setting addon default-storageclass=true in "no-preload-678119"
	I1009 19:38:04.059725  465613 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:04.060484  465613 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:04.061807  465613 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1009 19:37:59.850393  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:38:02.349266  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:38:04.064871  465613 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:04.064896  465613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:04.064960  465613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:04.088072  465613 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:04.088104  465613 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:04.088182  465613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:04.121473  465613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:04.132076  465613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:04.293233  465613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:38:04.382534  465613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:04.398737  465613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:04.429985  465613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:05.122496  465613 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1009 19:38:05.124516  465613 node_ready.go:35] waiting up to 6m0s for node "no-preload-678119" to be "Ready" ...
	I1009 19:38:05.423206  465613 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1009 19:38:04.351648  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:38:06.354556  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:38:08.849808  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:38:05.426040  465613 addons.go:514] duration metric: took 1.423124141s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 19:38:05.630202  465613 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-678119" context rescaled to 1 replicas
	W1009 19:38:07.130391  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	W1009 19:38:09.629049  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	I1009 19:38:11.349634  467248 pod_ready.go:94] pod "coredns-5dd5756b68-ftv2x" is "Ready"
	I1009 19:38:11.349667  467248 pod_ready.go:86] duration metric: took 36.506055517s for pod "coredns-5dd5756b68-ftv2x" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.353029  467248 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.358424  467248 pod_ready.go:94] pod "etcd-old-k8s-version-271815" is "Ready"
	I1009 19:38:11.358451  467248 pod_ready.go:86] duration metric: took 5.400857ms for pod "etcd-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.361447  467248 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.366435  467248 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-271815" is "Ready"
	I1009 19:38:11.366464  467248 pod_ready.go:86] duration metric: took 4.995486ms for pod "kube-apiserver-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.369493  467248 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.547782  467248 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-271815" is "Ready"
	I1009 19:38:11.547856  467248 pod_ready.go:86] duration metric: took 178.325949ms for pod "kube-controller-manager-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.748044  467248 pod_ready.go:83] waiting for pod "kube-proxy-7j6jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:12.147538  467248 pod_ready.go:94] pod "kube-proxy-7j6jw" is "Ready"
	I1009 19:38:12.147565  467248 pod_ready.go:86] duration metric: took 399.493667ms for pod "kube-proxy-7j6jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:12.347608  467248 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:12.747106  467248 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-271815" is "Ready"
	I1009 19:38:12.747136  467248 pod_ready.go:86] duration metric: took 399.500233ms for pod "kube-scheduler-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:12.747148  467248 pod_ready.go:40] duration metric: took 37.908857911s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:38:12.808149  467248 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1009 19:38:12.811360  467248 out.go:203] 
	W1009 19:38:12.814340  467248 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1009 19:38:12.817264  467248 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1009 19:38:12.820197  467248 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-271815" cluster and "default" namespace by default
	W1009 19:38:12.128866  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	W1009 19:38:14.129491  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	W1009 19:38:16.629348  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	I1009 19:38:18.632326  465613 node_ready.go:49] node "no-preload-678119" is "Ready"
	I1009 19:38:18.632352  465613 node_ready.go:38] duration metric: took 13.506496399s for node "no-preload-678119" to be "Ready" ...
	I1009 19:38:18.632365  465613 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:38:18.632425  465613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:38:18.650307  465613 api_server.go:72] duration metric: took 14.649195447s to wait for apiserver process to appear ...
	I1009 19:38:18.650334  465613 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:38:18.650354  465613 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:38:18.662701  465613 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 19:38:18.663700  465613 api_server.go:141] control plane version: v1.34.1
	I1009 19:38:18.663722  465613 api_server.go:131] duration metric: took 13.380679ms to wait for apiserver health ...
	I1009 19:38:18.663731  465613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:38:18.674828  465613 system_pods.go:59] 8 kube-system pods found
	I1009 19:38:18.674861  465613 system_pods.go:61] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending
	I1009 19:38:18.674867  465613 system_pods.go:61] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:18.674872  465613 system_pods.go:61] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:18.674876  465613 system_pods.go:61] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:18.674922  465613 system_pods.go:61] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:18.674934  465613 system_pods.go:61] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:18.674939  465613 system_pods.go:61] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:18.674950  465613 system_pods.go:61] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending
	I1009 19:38:18.674959  465613 system_pods.go:74] duration metric: took 11.221632ms to wait for pod list to return data ...
	I1009 19:38:18.674992  465613 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:38:18.680465  465613 default_sa.go:45] found service account: "default"
	I1009 19:38:18.680489  465613 default_sa.go:55] duration metric: took 5.489547ms for default service account to be created ...
	I1009 19:38:18.680499  465613 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:38:18.692260  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:18.692288  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending
	I1009 19:38:18.692311  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:18.692317  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:18.692322  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:18.692326  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:18.692330  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:18.692335  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:18.692351  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:38:18.692373  465613 retry.go:31] will retry after 216.368352ms: missing components: kube-dns
	I1009 19:38:18.913178  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:18.913213  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:38:18.913219  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:18.913226  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:18.913232  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:18.913237  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:18.913241  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:18.913251  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:18.913261  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:38:18.913279  465613 retry.go:31] will retry after 384.003219ms: missing components: kube-dns
	I1009 19:38:19.301508  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:19.301552  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:38:19.301559  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:19.301566  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:19.301603  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:19.301609  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:19.301613  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:19.301617  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:19.301622  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:38:19.301637  465613 retry.go:31] will retry after 296.341327ms: missing components: kube-dns
	I1009 19:38:19.602357  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:19.602395  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:38:19.602403  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:19.602409  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:19.602413  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:19.602422  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:19.602428  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:19.602432  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:19.602439  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:38:19.602459  465613 retry.go:31] will retry after 583.646415ms: missing components: kube-dns
	I1009 19:38:20.189921  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:20.189955  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Running
	I1009 19:38:20.189962  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:20.189967  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:20.189971  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:20.189977  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:20.189980  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:20.189984  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:20.189989  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Running
	I1009 19:38:20.189996  465613 system_pods.go:126] duration metric: took 1.509491123s to wait for k8s-apps to be running ...
	I1009 19:38:20.190009  465613 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:38:20.190080  465613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:20.204066  465613 system_svc.go:56] duration metric: took 14.047297ms WaitForService to wait for kubelet
	I1009 19:38:20.204094  465613 kubeadm.go:586] duration metric: took 16.202988581s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:20.204114  465613 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:38:20.206896  465613 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:38:20.206925  465613 node_conditions.go:123] node cpu capacity is 2
	I1009 19:38:20.206937  465613 node_conditions.go:105] duration metric: took 2.817197ms to run NodePressure ...
	I1009 19:38:20.206950  465613 start.go:241] waiting for startup goroutines ...
	I1009 19:38:20.206957  465613 start.go:246] waiting for cluster config update ...
	I1009 19:38:20.206968  465613 start.go:255] writing updated cluster config ...
	I1009 19:38:20.207265  465613 ssh_runner.go:195] Run: rm -f paused
	I1009 19:38:20.211300  465613 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:38:20.215419  465613 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.220732  465613 pod_ready.go:94] pod "coredns-66bc5c9577-cfmf8" is "Ready"
	I1009 19:38:20.220768  465613 pod_ready.go:86] duration metric: took 5.315752ms for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.223245  465613 pod_ready.go:83] waiting for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.228259  465613 pod_ready.go:94] pod "etcd-no-preload-678119" is "Ready"
	I1009 19:38:20.228287  465613 pod_ready.go:86] duration metric: took 5.014399ms for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.234674  465613 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.241658  465613 pod_ready.go:94] pod "kube-apiserver-no-preload-678119" is "Ready"
	I1009 19:38:20.241715  465613 pod_ready.go:86] duration metric: took 7.011582ms for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.244541  465613 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.615834  465613 pod_ready.go:94] pod "kube-controller-manager-no-preload-678119" is "Ready"
	I1009 19:38:20.615863  465613 pod_ready.go:86] duration metric: took 371.295393ms for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.815962  465613 pod_ready.go:83] waiting for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:21.216026  465613 pod_ready.go:94] pod "kube-proxy-cf6gt" is "Ready"
	I1009 19:38:21.216052  465613 pod_ready.go:86] duration metric: took 400.060706ms for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:21.415362  465613 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:21.815994  465613 pod_ready.go:94] pod "kube-scheduler-no-preload-678119" is "Ready"
	I1009 19:38:21.816027  465613 pod_ready.go:86] duration metric: took 400.63664ms for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:21.816053  465613 pod_ready.go:40] duration metric: took 1.604707728s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:38:21.882501  465613 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:38:21.890436  465613 out.go:179] * Done! kubectl is now configured to use "no-preload-678119" cluster and "default" namespace by default
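
The start log above probes the apiserver's /healthz endpoint and treats an HTTP 200 with an "ok" body as healthy ("Checking apiserver healthz at https://192.168.76.2:8443/healthz ... returned 200: ok"). A minimal Go sketch of that kind of probe follows; it is an illustration only, not minikube's implementation, and the 5-second timeout and insecure TLS setting are assumptions made to keep the example self-contained.

// healthzprobe.go - illustrative apiserver /healthz check.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A real client would trust the cluster CA from the kubeconfig
		// instead of skipping certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // e.g. 200: ok
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.76.2:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
	}
}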
	
	
	==> CRI-O <==
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.948555666Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.952487943Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.952521806Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.952544633Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.955644065Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.955676697Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.955700057Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.96024433Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.960279932Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.960304105Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.963619128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.963807233Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.796251802Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=756512cf-31de-4c54-91b5-9617ad50c1f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.797868936Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=47b5af2b-182c-4f27-b170-fa32baacfb5a name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.79915732Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls/dashboard-metrics-scraper" id=071cc4f3-71f3-4f96-88a9-511e9bda1aa1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.799435042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.806414731Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.806948185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.827620729Z" level=info msg="Created container 6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls/dashboard-metrics-scraper" id=071cc4f3-71f3-4f96-88a9-511e9bda1aa1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.829726229Z" level=info msg="Starting container: 6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be" id=15b0c75e-6f1a-4836-bf07-66bc84917252 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.831787389Z" level=info msg="Started container" PID=1712 containerID=6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls/dashboard-metrics-scraper id=15b0c75e-6f1a-4836-bf07-66bc84917252 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b7d07859200a3186af6679ae61a49953c7a1aeaecbd8ba0e759c6e0898341d3
	Oct 09 19:38:18 old-k8s-version-271815 conmon[1710]: conmon 6cfc8e3ac66b23ced830 <ninfo>: container 1712 exited with status 1
	Oct 09 19:38:19 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:19.187318771Z" level=info msg="Removing container: b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3" id=2f3a2912-87b9-4e02-a710-856a41a57219 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:38:19 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:19.195851396Z" level=info msg="Error loading conmon cgroup of container b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3: cgroup deleted" id=2f3a2912-87b9-4e02-a710-856a41a57219 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:38:19 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:19.19962245Z" level=info msg="Removed container b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls/dashboard-metrics-scraper" id=2f3a2912-87b9-4e02-a710-856a41a57219 name=/runtime.v1.RuntimeService/RemoveContainer
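
The CRI-O lines above show the runtime reacting to WRITE, RENAME and CREATE events on /etc/cni/net.d and re-reading the kindnet conflist each time. A minimal sketch of that style of directory watching, using the fsnotify library, is shown below; it is illustrative only and is not CRI-O's actual watcher.

// cniwatch.go - watch a CNI config directory and log conflist changes.
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return
			}
			// React only to conflist writes/creates/renames, mirroring the
			// "CNI monitoring event ..." entries in the log above.
			if strings.HasSuffix(ev.Name, ".conflist") &&
				ev.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Rename) != 0 {
				log.Printf("CNI config change %s %q - would reload network config", ev.Op, ev.Name)
			}
		case err, ok := <-w.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}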
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	6cfc8e3ac66b2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   2                   5b7d07859200a       dashboard-metrics-scraper-5f989dc9cf-9vwls       kubernetes-dashboard
	e3f12c8476c7f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   c90803e5fe33c       storage-provisioner                              kube-system
	2d63418e6ce2f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   d48535f523137       kubernetes-dashboard-8694d4445c-h9ccf            kubernetes-dashboard
	10744f141a4b0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   eb8686b392d69       coredns-5dd5756b68-ftv2x                         kube-system
	a229c68dff091       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   c90803e5fe33c       storage-provisioner                              kube-system
	58a8b3ce524be       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   745cf6f9e82b2       busybox                                          default
	bac73e4710084       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   f6d98f0435604       kindnet-t5pvl                                    kube-system
	ab150203b138e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   d9125e6c54642       kube-proxy-7j6jw                                 kube-system
	269bca5e10b87       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   f263a1a241982       kube-controller-manager-old-k8s-version-271815   kube-system
	f7644ea5932c5       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   4501fdeb4d781       kube-apiserver-old-k8s-version-271815            kube-system
	e3161747cdf01       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   f7bbeb46f8786       etcd-old-k8s-version-271815                      kube-system
	5225c215f7ddf       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   103ff185f1acb       kube-scheduler-old-k8s-version-271815            kube-system
	
	
	==> coredns [10744f141a4b0dfd34d28c3a32335c0845b684257b88d74b758a7fe58035975e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34620 - 4392 "HINFO IN 4559633695322455595.7631379097162551978. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021807288s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
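
For reference, the hosts record injected into CoreDNS during start (the kubectl replace pipeline near the top of this section) lands in the Corefile roughly as follows. This fragment is reconstructed from that sed command, not captured from the cluster, and it shows the no-preload cluster's gateway (192.168.76.1); the old-k8s-version cluster whose coredns log appears above would carry its own gateway address.

.:53 {
    log
    errors
    ...
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}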
	
	
	==> describe nodes <==
	Name:               old-k8s-version-271815
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-271815
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=old-k8s-version-271815
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_36_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:36:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-271815
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:38:01 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:38:01 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:38:01 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:38:01 +0000   Thu, 09 Oct 2025 19:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-271815
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5aa5c1fd2e642859d5aa3878c95a1e2
	  System UUID:                1963e6d2-e326-4444-bd99-5534a70044a9
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-5dd5756b68-ftv2x                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-old-k8s-version-271815                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m10s
	  kube-system                 kindnet-t5pvl                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-old-k8s-version-271815             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-old-k8s-version-271815    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-7j6jw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-old-k8s-version-271815             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-9vwls        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-h9ccf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 116s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s              kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s              kubelet          Node old-k8s-version-271815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s              kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m11s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           118s               node-controller  Node old-k8s-version-271815 event: Registered Node old-k8s-version-271815 in Controller
	  Normal  NodeReady                103s               kubelet          Node old-k8s-version-271815 status is now: NodeReady
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node old-k8s-version-271815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-271815 event: Registered Node old-k8s-version-271815 in Controller
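
As a quick cross-check of the "Allocated resources" figures above: the percentages appear to be requests (or limits) divided by the node's allocatable amounts, truncated to whole percent. For example, 850m of CPU requests against 2 cores is 850/2000 = 42.5%, shown as 42%, and 220Mi of memory against 8022300Ki allocatable is 225280/8022300 ≈ 2.8%, shown as 2%.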
	
	
	==> dmesg <==
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e3161747cdf0118272a93fab2eee3081718d8e3f73492036c9219649d6fbe93f] <==
	{"level":"info","ts":"2025-10-09T19:37:23.860143Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-09T19:37:23.860235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T19:37:23.860261Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T19:37:23.906497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:37:23.906562Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:37:23.906571Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:37:23.938999Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-09T19:37:23.941543Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T19:37:23.941631Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T19:37:23.943049Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-09T19:37:23.943141Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-09T19:37:25.36217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-09T19:37:25.362283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-09T19:37:25.362324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-09T19:37:25.362371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-09T19:37:25.362407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-09T19:37:25.36245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-09T19:37:25.362482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-09T19:37:25.368506Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-271815 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-09T19:37:25.368604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T19:37:25.369593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-09T19:37:25.373688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T19:37:25.374778Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-09T19:37:25.390527Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-09T19:37:25.390607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:38:27 up  2:20,  0 user,  load average: 4.39, 2.82, 2.22
	Linux old-k8s-version-271815 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bac73e47100848955f3f3f4f9b77a47feeb98dc2c3bf4b8a567178090f45a220] <==
	I1009 19:37:31.714508       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:37:31.722288       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:37:31.722538       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:37:31.722585       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:37:31.722623       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:37:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:37:31.939245       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:37:31.939315       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:37:31.939348       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:37:31.943126       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:38:01.940899       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:38:01.943469       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:38:01.943673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:38:01.943751       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 19:38:03.543433       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:38:03.543463       1 metrics.go:72] Registering metrics
	I1009 19:38:03.543537       1 controller.go:711] "Syncing nftables rules"
	I1009 19:38:11.939686       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:38:11.940838       1 main.go:301] handling current node
	I1009 19:38:21.939680       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:38:21.939777       1 main.go:301] handling current node
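
The kindnet log above is the standard client-go informer lifecycle: the reflector's initial LIST calls fail with "dial tcp 10.96.0.1:443: i/o timeout" while the apiserver is still coming back, are retried, and the controller proceeds once "Caches are synced". A minimal sketch of that informer/cache-sync pattern follows; it assumes in-cluster credentials and is illustrative only, not kindnet's actual controller wiring.

// informer_sketch.go - shared-informer startup mirroring the
// "Waiting for informer caches to sync" / "Caches are synced" lines above.
package main

import (
	"log"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactory(client, 0)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)

	log.Println("Waiting for informer caches to sync")
	factory.Start(stop)
	// The reflector behind the informer retries its LIST/WATCH on errors such
	// as the i/o timeouts in the log; WaitForCacheSync blocks until it succeeds.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		log.Fatal("failed to sync caches")
	}
	log.Println("Caches are synced")

	for _, obj := range nodeInformer.GetStore().List() {
		log.Printf("cached object: %T", obj)
	}
}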
	
	
	==> kube-apiserver [f7644ea5932c5e3b7208518f0c96c2d54106c86cb9d11050d55518f2ed9bac0d] <==
	I1009 19:37:30.468312       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1009 19:37:30.471532       1 aggregator.go:166] initial CRD sync complete...
	I1009 19:37:30.471557       1 autoregister_controller.go:141] Starting autoregister controller
	I1009 19:37:30.471564       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:37:30.471571       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:37:30.485884       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1009 19:37:30.485928       1 shared_informer.go:318] Caches are synced for configmaps
	I1009 19:37:30.512956       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1009 19:37:30.543172       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:37:30.952433       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:37:34.425757       1 controller.go:624] quota admission added evaluator for: namespaces
	I1009 19:37:34.548576       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1009 19:37:34.620054       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:37:34.639488       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:37:34.657775       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1009 19:37:34.756057       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.137.177"}
	I1009 19:37:34.771314       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.234.247"}
	E1009 19:37:40.462434       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I1009 19:37:43.353286       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:37:43.368291       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1009 19:37:43.390918       1 controller.go:624] quota admission added evaluator for: endpoints
	E1009 19:37:50.463087       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1009 19:38:00.463936       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1009 19:38:10.464568       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1009 19:38:20.465254       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [269bca5e10b8713a319b82f53713720fe83bba918a22af6746851588062867f7] <==
	I1009 19:37:43.399052       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1009 19:37:43.423946       1 shared_informer.go:318] Caches are synced for resource quota
	I1009 19:37:43.439787       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-9vwls"
	I1009 19:37:43.440310       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-h9ccf"
	I1009 19:37:43.450283       1 shared_informer.go:318] Caches are synced for resource quota
	I1009 19:37:43.473970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.270917ms"
	I1009 19:37:43.474428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="87.315337ms"
	I1009 19:37:43.545410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.312813ms"
	I1009 19:37:43.560300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.794598ms"
	I1009 19:37:43.560903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.215µs"
	I1009 19:37:43.581312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.785199ms"
	I1009 19:37:43.581500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.579µs"
	I1009 19:37:43.585041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.534µs"
	I1009 19:37:43.798303       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 19:37:43.798350       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1009 19:37:43.807586       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 19:37:52.113290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.041078ms"
	I1009 19:37:52.113367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.83µs"
	I1009 19:37:59.119850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.849µs"
	I1009 19:38:00.252571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.025µs"
	I1009 19:38:01.148266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.027µs"
	I1009 19:38:10.984511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.900909ms"
	I1009 19:38:10.984773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.22µs"
	I1009 19:38:19.208763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.835µs"
	I1009 19:38:23.801965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.768µs"
	
	
	==> kube-proxy [ab150203b138e1f09f9d40149a76bfda555618d4861f5b1ecee8f410e751492a] <==
	I1009 19:37:32.349682       1 server_others.go:69] "Using iptables proxy"
	I1009 19:37:32.663658       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1009 19:37:33.176701       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:37:33.181232       1 server_others.go:152] "Using iptables Proxier"
	I1009 19:37:33.181343       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1009 19:37:33.181377       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1009 19:37:33.181434       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 19:37:33.181772       1 server.go:846] "Version info" version="v1.28.0"
	I1009 19:37:33.182028       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:37:33.183378       1 config.go:188] "Starting service config controller"
	I1009 19:37:33.183460       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 19:37:33.183504       1 config.go:97] "Starting endpoint slice config controller"
	I1009 19:37:33.183530       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 19:37:33.184231       1 config.go:315] "Starting node config controller"
	I1009 19:37:33.184527       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 19:37:33.285851       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1009 19:37:33.285908       1 shared_informer.go:318] Caches are synced for service config
	I1009 19:37:33.286255       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5225c215f7ddf7ce082f0615c771a305c50067a07bcd3df6f45b03ea91f2b24c] <==
	I1009 19:37:28.591420       1 serving.go:348] Generated self-signed cert in-memory
	I1009 19:37:31.882853       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1009 19:37:31.882963       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:37:31.905231       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1009 19:37:31.905520       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1009 19:37:31.905542       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1009 19:37:31.905565       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1009 19:37:31.926778       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:37:31.926810       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 19:37:31.926838       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:37:31.926844       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1009 19:37:32.010414       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1009 19:37:32.027732       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1009 19:37:32.027808       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: I1009 19:37:43.623864     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdnkv\" (UniqueName: \"kubernetes.io/projected/005602fb-94aa-46b2-94ef-5bb2d79d974f-kube-api-access-mdnkv\") pod \"kubernetes-dashboard-8694d4445c-h9ccf\" (UID: \"005602fb-94aa-46b2-94ef-5bb2d79d974f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-h9ccf"
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: I1009 19:37:43.623982     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9rl\" (UniqueName: \"kubernetes.io/projected/9ef4de73-8be0-4e7a-b14b-b0000e7a60b8-kube-api-access-tb9rl\") pod \"dashboard-metrics-scraper-5f989dc9cf-9vwls\" (UID: \"9ef4de73-8be0-4e7a-b14b-b0000e7a60b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls"
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: I1009 19:37:43.624096     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9ef4de73-8be0-4e7a-b14b-b0000e7a60b8-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-9vwls\" (UID: \"9ef4de73-8be0-4e7a-b14b-b0000e7a60b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls"
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: W1009 19:37:43.879778     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/crio-d48535f523137a09df2204ae0a17bc675b5442adfd77deb236694c7237bff103 WatchSource:0}: Error finding container d48535f523137a09df2204ae0a17bc675b5442adfd77deb236694c7237bff103: Status 404 returned error can't find the container with id d48535f523137a09df2204ae0a17bc675b5442adfd77deb236694c7237bff103
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: W1009 19:37:43.914415     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/crio-5b7d07859200a3186af6679ae61a49953c7a1aeaecbd8ba0e759c6e0898341d3 WatchSource:0}: Error finding container 5b7d07859200a3186af6679ae61a49953c7a1aeaecbd8ba0e759c6e0898341d3: Status 404 returned error can't find the container with id 5b7d07859200a3186af6679ae61a49953c7a1aeaecbd8ba0e759c6e0898341d3
	Oct 09 19:37:52 old-k8s-version-271815 kubelet[779]: I1009 19:37:52.087775     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-h9ccf" podStartSLOduration=1.833283271 podCreationTimestamp="2025-10-09 19:37:43 +0000 UTC" firstStartedPulling="2025-10-09 19:37:43.886757991 +0000 UTC m=+21.352174372" lastFinishedPulling="2025-10-09 19:37:51.140444997 +0000 UTC m=+28.605861378" observedRunningTime="2025-10-09 19:37:52.086553952 +0000 UTC m=+29.551970341" watchObservedRunningTime="2025-10-09 19:37:52.086970277 +0000 UTC m=+29.552386666"
	Oct 09 19:37:59 old-k8s-version-271815 kubelet[779]: I1009 19:37:59.090536     779 scope.go:117] "RemoveContainer" containerID="839f52f92a9c3fa928a49e920789f702d9f46086b7f1cae1d9258b0a95f36a9c"
	Oct 09 19:38:00 old-k8s-version-271815 kubelet[779]: I1009 19:38:00.129721     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:00 old-k8s-version-271815 kubelet[779]: I1009 19:38:00.130905     779 scope.go:117] "RemoveContainer" containerID="839f52f92a9c3fa928a49e920789f702d9f46086b7f1cae1d9258b0a95f36a9c"
	Oct 09 19:38:00 old-k8s-version-271815 kubelet[779]: E1009 19:38:00.135162     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:01 old-k8s-version-271815 kubelet[779]: I1009 19:38:01.133454     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:01 old-k8s-version-271815 kubelet[779]: E1009 19:38:01.133753     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:02 old-k8s-version-271815 kubelet[779]: I1009 19:38:02.137612     779 scope.go:117] "RemoveContainer" containerID="a229c68dff09136aa0c245ad345e4da9e4d2661e0f36d75399c3187ca7dad029"
	Oct 09 19:38:03 old-k8s-version-271815 kubelet[779]: I1009 19:38:03.783186     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:03 old-k8s-version-271815 kubelet[779]: E1009 19:38:03.783637     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:18 old-k8s-version-271815 kubelet[779]: I1009 19:38:18.795241     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:19 old-k8s-version-271815 kubelet[779]: I1009 19:38:19.182841     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:19 old-k8s-version-271815 kubelet[779]: I1009 19:38:19.183152     779 scope.go:117] "RemoveContainer" containerID="6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be"
	Oct 09 19:38:19 old-k8s-version-271815 kubelet[779]: E1009 19:38:19.183467     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:23 old-k8s-version-271815 kubelet[779]: I1009 19:38:23.783195     779 scope.go:117] "RemoveContainer" containerID="6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be"
	Oct 09 19:38:23 old-k8s-version-271815 kubelet[779]: E1009 19:38:23.784000     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:25 old-k8s-version-271815 kubelet[779]: I1009 19:38:25.054215     779 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 09 19:38:25 old-k8s-version-271815 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:38:25 old-k8s-version-271815 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:38:25 old-k8s-version-271815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2d63418e6ce2f77eca69e74ff9ea7e78acc2dc61f5289982a96c1ac9c78d7392] <==
	2025/10/09 19:37:51 Using namespace: kubernetes-dashboard
	2025/10/09 19:37:51 Using in-cluster config to connect to apiserver
	2025/10/09 19:37:51 Using secret token for csrf signing
	2025/10/09 19:37:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 19:37:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 19:37:51 Successful initial request to the apiserver, version: v1.28.0
	2025/10/09 19:37:51 Generating JWE encryption key
	2025/10/09 19:37:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 19:37:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 19:37:51 Initializing JWE encryption key from synchronized object
	2025/10/09 19:37:51 Creating in-cluster Sidecar client
	2025/10/09 19:37:51 Serving insecurely on HTTP port: 9090
	2025/10/09 19:37:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:38:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:37:51 Starting overwatch
	
	
	==> storage-provisioner [a229c68dff09136aa0c245ad345e4da9e4d2661e0f36d75399c3187ca7dad029] <==
	I1009 19:37:31.906205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:38:02.010986       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e3f12c8476c7f675ccdaedbeec3d39577c01787f3e043afaf02faad8eef8a730] <==
	I1009 19:38:02.207604       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:38:02.230310       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:38:02.230371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 19:38:19.627051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:38:19.627741       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-271815_b7b4dc8e-2340-4252-a763-ca31f3ea6d7a!
	I1009 19:38:19.627566       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89b8e788-298a-4512-8566-f2088b6d05b0", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-271815_b7b4dc8e-2340-4252-a763-ca31f3ea6d7a became leader
	I1009 19:38:19.728655       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-271815_b7b4dc8e-2340-4252-a763-ca31f3ea6d7a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-271815 -n old-k8s-version-271815
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-271815 -n old-k8s-version-271815: exit status 2 (418.470666ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-271815 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-271815
helpers_test.go:243: (dbg) docker inspect old-k8s-version-271815:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980",
	        "Created": "2025-10-09T19:35:50.362074272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 467432,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:37:14.685804818Z",
	            "FinishedAt": "2025-10-09T19:37:13.667247119Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/hostname",
	        "HostsPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/hosts",
	        "LogPath": "/var/lib/docker/containers/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980-json.log",
	        "Name": "/old-k8s-version-271815",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-271815:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-271815",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980",
	                "LowerDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/785167fbab6f7e4dc7fb68fd78c2538d6858b3b6a49ddc1cf74e25a763684d63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-271815",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-271815/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-271815",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-271815",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-271815",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c654eb20238f6304afc2c1634891d448b2c00dd384949552091221bcf1a44cc3",
	            "SandboxKey": "/var/run/docker/netns/c654eb20238f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-271815": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:fc:97:5d:a5:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9b70e298602c2790a8fd04817e351fbf4f06c3fbce53648b556f8d8aa63fa4cc",
	                    "EndpointID": "231032790e32f835603f3b747b9a1e82fec228c6e57bccfed131ab97aba8ba39",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-271815",
	                        "395bb50f3c39"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-271815 -n old-k8s-version-271815
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-271815 -n old-k8s-version-271815: exit status 2 (347.286719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-271815 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-271815 logs -n 25: (1.41500071s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-224541 sudo containerd config dump                                                                                                                                                                                                  │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ -p cilium-224541 sudo crio config                                                                                                                                                                                                             │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ delete  │ -p cilium-224541                                                                                                                                                                                                                              │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ start   │ -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ force-systemd-flag-476949 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-flag-476949                                                                                                                                                                                                                  │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-028248                                                                                                                                                                                                                   │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ cert-options-983220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ -p cert-options-983220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ delete  │ -p cert-options-983220                                                                                                                                                                                                                        │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │ 09 Oct 25 19:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-271815 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ delete  │ -p cert-expiration-259172                                                                                                                                                                                                                     │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:37:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:37:14.325897  467248 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:37:14.326103  467248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:14.326116  467248 out.go:374] Setting ErrFile to fd 2...
	I1009 19:37:14.326122  467248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:37:14.326445  467248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:37:14.326897  467248 out.go:368] Setting JSON to false
	I1009 19:37:14.328040  467248 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8386,"bootTime":1760030249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:37:14.328114  467248 start.go:141] virtualization:  
	I1009 19:37:14.330967  467248 out.go:179] * [old-k8s-version-271815] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:37:14.334806  467248 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:37:14.334939  467248 notify.go:220] Checking for updates...
	I1009 19:37:14.340759  467248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:37:14.343750  467248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:37:14.346610  467248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:37:14.349565  467248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:37:14.352656  467248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:37:14.356169  467248 config.go:182] Loaded profile config "old-k8s-version-271815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 19:37:14.359736  467248 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1009 19:37:14.362718  467248 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:37:14.388788  467248 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:37:14.388896  467248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:14.455661  467248 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-09 19:37:14.445631786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:37:14.455769  467248 docker.go:318] overlay module found
	I1009 19:37:14.459065  467248 out.go:179] * Using the docker driver based on existing profile
	I1009 19:37:14.462099  467248 start.go:305] selected driver: docker
	I1009 19:37:14.462118  467248 start.go:925] validating driver "docker" against &{Name:old-k8s-version-271815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:37:14.462266  467248 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:37:14.462978  467248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:37:14.556008  467248 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-09 19:37:14.540980075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:37:14.556370  467248 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:37:14.556391  467248 cni.go:84] Creating CNI manager for ""
	I1009 19:37:14.556470  467248 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:37:14.556521  467248 start.go:349] cluster config:
	{Name:old-k8s-version-271815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:37:14.559745  467248 out.go:179] * Starting "old-k8s-version-271815" primary control-plane node in "old-k8s-version-271815" cluster
	I1009 19:37:14.562625  467248 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:37:14.565679  467248 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:37:14.568547  467248 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 19:37:14.568610  467248 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1009 19:37:14.568624  467248 cache.go:64] Caching tarball of preloaded images
	I1009 19:37:14.568733  467248 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:37:14.568751  467248 cache.go:67] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1009 19:37:14.568884  467248 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/config.json ...
	I1009 19:37:14.569168  467248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:37:14.601996  467248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:37:14.602022  467248 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:37:14.602036  467248 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:37:14.602058  467248 start.go:360] acquireMachinesLock for old-k8s-version-271815: {Name:mk2253e3ad61415788b159368a95085c5f2eeced Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:37:14.602190  467248 start.go:364] duration metric: took 93.466µs to acquireMachinesLock for "old-k8s-version-271815"
	I1009 19:37:14.602229  467248 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:37:14.602242  467248 fix.go:54] fixHost starting: 
	I1009 19:37:14.602567  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:14.634425  467248 fix.go:112] recreateIfNeeded on old-k8s-version-271815: state=Stopped err=<nil>
	W1009 19:37:14.634455  467248 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:37:14.190278  465613 cli_runner.go:164] Run: docker network inspect no-preload-678119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:37:14.208218  465613 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:37:14.213450  465613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
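The /etc/hosts one-liner above is dense; an equivalent, commented form follows (same commands, the GATEWAY_IP variable is introduced here only for readability):

	GATEWAY_IP=192.168.76.1
	# drop any stale host.minikube.internal entry, append the current gateway IP,
	# then copy the temp file back over /etc/hosts with sudo
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$GATEWAY_IP	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts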
	I1009 19:37:14.224915  465613 kubeadm.go:883] updating cluster {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:37:14.225020  465613 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:37:14.225065  465613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:37:14.259991  465613 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1009 19:37:14.260021  465613 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 19:37:14.260062  465613 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:14.260271  465613 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.260378  465613 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.260493  465613 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.260593  465613 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.260694  465613 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1009 19:37:14.260784  465613 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.260879  465613 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.263119  465613 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.263132  465613 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.263205  465613 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:14.263253  465613 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.263402  465613 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.263407  465613 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.263462  465613 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1009 19:37:14.263533  465613 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.504806  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.519360  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.530971  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1009 19:37:14.532534  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.534432  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.535095  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.544329  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.636645  465613 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1009 19:37:14.636696  465613 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.636769  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.709908  465613 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1009 19:37:14.709960  465613 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.710015  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.778407  465613 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1009 19:37:14.778453  465613 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.778519  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.778612  465613 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1009 19:37:14.778643  465613 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1009 19:37:14.778675  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.809099  465613 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1009 19:37:14.809147  465613 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.809198  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.809270  465613 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1009 19:37:14.809292  465613 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.809319  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.809398  465613 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1009 19:37:14.809420  465613 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.809447  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:14.809522  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.809614  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.809692  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.809805  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1009 19:37:14.907893  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:14.908054  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:14.908102  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:14.908205  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:14.908257  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:14.908302  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:14.908329  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
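What cache_images is doing in this stretch, roughly: ask the runtime for each required image's stored ID, and if it differs from the expected one, remove the tag so a fresh copy can be loaded from the local cache. A minimal sketch of that check for one image (expected ID taken from the log line above; plain crictl/podman calls stand in for minikube's helpers):

	IMG=registry.k8s.io/kube-apiserver:v1.34.1
	WANT=43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
	GOT=$(sudo podman image inspect --format '{{.Id}}' "$IMG" 2>/dev/null)
	if [ "$GOT" != "$WANT" ]; then
	  sudo crictl rmi "$IMG"   # image missing or wrong digest: clear it before reloading from cache
	fi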
	I1009 19:37:14.637749  467248 out.go:252] * Restarting existing docker container for "old-k8s-version-271815" ...
	I1009 19:37:14.637841  467248 cli_runner.go:164] Run: docker start old-k8s-version-271815
	I1009 19:37:15.051640  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:15.094311  467248 kic.go:430] container "old-k8s-version-271815" state is running.
	I1009 19:37:15.094704  467248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-271815
	I1009 19:37:15.122989  467248 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/config.json ...
	I1009 19:37:15.123238  467248 machine.go:93] provisionDockerMachine start ...
	I1009 19:37:15.123310  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:15.157785  467248 main.go:141] libmachine: Using SSH client type: native
	I1009 19:37:15.158110  467248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1009 19:37:15.158119  467248 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:37:15.159130  467248 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41414->127.0.0.1:33430: read: connection reset by peer
	I1009 19:37:18.326015  467248 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-271815
	
	I1009 19:37:18.326047  467248 ubuntu.go:182] provisioning hostname "old-k8s-version-271815"
	I1009 19:37:18.326108  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:18.350364  467248 main.go:141] libmachine: Using SSH client type: native
	I1009 19:37:18.350673  467248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1009 19:37:18.350691  467248 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-271815 && echo "old-k8s-version-271815" | sudo tee /etc/hostname
	I1009 19:37:18.517150  467248 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-271815
	
	I1009 19:37:18.517309  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:18.550960  467248 main.go:141] libmachine: Using SSH client type: native
	I1009 19:37:18.551338  467248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1009 19:37:18.551357  467248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-271815' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-271815/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-271815' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:37:18.706474  467248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:37:18.706551  467248 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:37:18.706588  467248 ubuntu.go:190] setting up certificates
	I1009 19:37:18.706622  467248 provision.go:84] configureAuth start
	I1009 19:37:18.706709  467248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-271815
	I1009 19:37:18.728701  467248 provision.go:143] copyHostCerts
	I1009 19:37:18.728766  467248 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:37:18.728783  467248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:37:18.728862  467248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:37:18.728962  467248 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:37:18.728968  467248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:37:18.728994  467248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:37:18.729045  467248 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:37:18.729049  467248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:37:18.729071  467248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:37:18.729115  467248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-271815 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-271815]
	I1009 19:37:19.031582  467248 provision.go:177] copyRemoteCerts
	I1009 19:37:19.031704  467248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:37:19.031762  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.049070  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:19.156033  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:37:19.180905  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 19:37:19.208610  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:37:19.236320  467248 provision.go:87] duration metric: took 529.656078ms to configureAuth
	I1009 19:37:19.236389  467248 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:37:19.236605  467248 config.go:182] Loaded profile config "old-k8s-version-271815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 19:37:19.236751  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.267911  467248 main.go:141] libmachine: Using SSH client type: native
	I1009 19:37:19.268214  467248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1009 19:37:19.268228  467248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:37:15.148039  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1009 19:37:15.148114  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:15.148163  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:15.148209  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:15.148264  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1009 19:37:15.148304  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1009 19:37:15.148351  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1009 19:37:15.344043  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1009 19:37:15.344146  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1009 19:37:15.344210  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1009 19:37:15.344260  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1009 19:37:15.344321  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1009 19:37:15.344376  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1009 19:37:15.344431  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 19:37:15.344494  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1009 19:37:15.344547  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1009 19:37:15.344599  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1009 19:37:15.344647  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1009 19:37:15.434026  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1009 19:37:15.434311  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1009 19:37:15.434114  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1009 19:37:15.434435  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1009 19:37:15.434149  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1009 19:37:15.434546  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1009 19:37:15.434218  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1009 19:37:15.434704  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1009 19:37:15.434251  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1009 19:37:15.434834  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1009 19:37:15.434271  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1009 19:37:15.434923  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1009 19:37:15.434181  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1009 19:37:15.435045  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	W1009 19:37:15.520388  465613 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1009 19:37:15.520572  465613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:15.527020  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1009 19:37:15.527062  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1009 19:37:15.527140  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1009 19:37:15.527206  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1009 19:37:15.527220  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1009 19:37:15.527309  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1009 19:37:15.561202  465613 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1009 19:37:15.561333  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1009 19:37:15.581082  465613 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1009 19:37:15.581166  465613 retry.go:31] will retry after 134.152426ms: ssh: rejected: connect failed (open failed)
	I1009 19:37:15.716209  465613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:37:15.744824  465613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:37:15.803237  465613 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1009 19:37:15.803327  465613 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:15.803400  465613 ssh_runner.go:195] Run: which crictl
	I1009 19:37:15.803483  465613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:37:15.847104  465613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:37:16.133326  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:16.133504  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1009 19:37:16.188192  465613 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1009 19:37:16.188326  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1009 19:37:16.268409  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:18.210811  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.02243924s)
	I1009 19:37:18.210837  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1009 19:37:18.210857  465613 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1009 19:37:18.210855  465613 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.942414218s)
	I1009 19:37:18.210904  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1009 19:37:18.210917  465613 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
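The per-image load path above, condensed: check whether the tarball is already staged under /var/lib/minikube/images, copy it over from the local cache if not, then load it with podman so CRI-O can see it. A sketch for one image (paths copied from the log; the echo stands in for minikube's scp step):

	TARBALL=/var/lib/minikube/images/pause_3.10.1
	if ! stat -c "%s %y" "$TARBALL" >/dev/null 2>&1; then
	  echo "not staged yet: minikube copies it from .minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1"
	fi
	sudo podman load -i "$TARBALL"   # populates the shared image store that CRI-O reads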
	I1009 19:37:19.677765  467248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:37:19.677791  467248 machine.go:96] duration metric: took 4.554542396s to provisionDockerMachine
	I1009 19:37:19.677803  467248 start.go:293] postStartSetup for "old-k8s-version-271815" (driver="docker")
	I1009 19:37:19.677830  467248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:37:19.677909  467248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:37:19.677956  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.703348  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:19.811094  467248 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:37:19.814920  467248 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:37:19.814992  467248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:37:19.815018  467248 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:37:19.815097  467248 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:37:19.815217  467248 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:37:19.815355  467248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:37:19.823229  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:37:19.845761  467248 start.go:296] duration metric: took 167.942931ms for postStartSetup
	I1009 19:37:19.845896  467248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:37:19.845978  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.863767  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:19.967489  467248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:37:19.975594  467248 fix.go:56] duration metric: took 5.373344999s for fixHost
	I1009 19:37:19.975668  467248 start.go:83] releasing machines lock for "old-k8s-version-271815", held for 5.373458156s
	I1009 19:37:19.975762  467248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-271815
	I1009 19:37:19.993655  467248 ssh_runner.go:195] Run: cat /version.json
	I1009 19:37:19.993694  467248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:37:19.993715  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:19.993748  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:20.031004  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:20.037687  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:20.154873  467248 ssh_runner.go:195] Run: systemctl --version
	I1009 19:37:20.246813  467248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:37:20.300731  467248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:37:20.305744  467248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:37:20.305846  467248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:37:20.314753  467248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:37:20.314779  467248 start.go:495] detecting cgroup driver to use...
	I1009 19:37:20.314842  467248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:37:20.314910  467248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:37:20.331003  467248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:37:20.346312  467248 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:37:20.346405  467248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:37:20.363374  467248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:37:20.377809  467248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:37:20.510355  467248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:37:20.644006  467248 docker.go:234] disabling docker service ...
	I1009 19:37:20.644130  467248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:37:20.659336  467248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:37:20.674619  467248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:37:20.829171  467248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:37:20.981224  467248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:37:20.996513  467248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:37:21.014065  467248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 19:37:21.014219  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.024035  467248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:37:21.024169  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.033896  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.043737  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.053477  467248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:37:21.062420  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.080946  467248 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.091594  467248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:37:21.107967  467248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:37:21.120465  467248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:37:21.131337  467248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:37:21.335871  467248 ssh_runner.go:195] Run: sudo systemctl restart crio
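Collected into one place, the CRI-O reconfiguration above is a handful of sed edits to the drop-in config followed by a restart (same file and keys as in the logged commands):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                          # drop any existing conmon_cgroup
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"   # re-add it right after cgroup_manager
	sudo systemctl daemon-reload && sudo systemctl restart crio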
	I1009 19:37:21.890428  467248 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:37:21.890545  467248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:37:21.896041  467248 start.go:563] Will wait 60s for crictl version
	I1009 19:37:21.896218  467248 ssh_runner.go:195] Run: which crictl
	I1009 19:37:21.900948  467248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:37:21.931711  467248 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:37:21.931848  467248 ssh_runner.go:195] Run: crio --version
	I1009 19:37:21.967583  467248 ssh_runner.go:195] Run: crio --version
	I1009 19:37:22.005968  467248 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1009 19:37:22.009290  467248 cli_runner.go:164] Run: docker network inspect old-k8s-version-271815 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:37:22.031895  467248 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:37:22.036217  467248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:37:22.047018  467248 kubeadm.go:883] updating cluster {Name:old-k8s-version-271815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:37:22.047139  467248 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 19:37:22.047191  467248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:37:22.093550  467248 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:37:22.093570  467248 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:37:22.093655  467248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:37:22.123246  467248 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:37:22.123266  467248 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:37:22.123273  467248 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1009 19:37:22.123375  467248 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-271815 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:37:22.123452  467248 ssh_runner.go:195] Run: crio config
	I1009 19:37:22.202258  467248 cni.go:84] Creating CNI manager for ""
	I1009 19:37:22.202328  467248 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:37:22.202362  467248 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:37:22.202411  467248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-271815 NodeName:old-k8s-version-271815 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:37:22.202654  467248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-271815"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:37:22.202753  467248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1009 19:37:22.214785  467248 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:37:22.214910  467248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:37:22.227417  467248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1009 19:37:22.261590  467248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:37:22.284664  467248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1009 19:37:22.308919  467248 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:37:22.313040  467248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:37:22.323990  467248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:37:22.499417  467248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:37:22.515894  467248 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815 for IP: 192.168.85.2
	I1009 19:37:22.515964  467248 certs.go:195] generating shared ca certs ...
	I1009 19:37:22.516017  467248 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:22.516194  467248 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:37:22.516271  467248 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:37:22.516293  467248 certs.go:257] generating profile certs ...
	I1009 19:37:22.516426  467248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.key
	I1009 19:37:22.516540  467248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/apiserver.key.008660bc
	I1009 19:37:22.516629  467248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/proxy-client.key
	I1009 19:37:22.516772  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:37:22.516848  467248 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:37:22.516874  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:37:22.516939  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:37:22.516992  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:37:22.517044  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:37:22.517122  467248 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:37:22.517893  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:37:22.545005  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:37:22.577660  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:37:22.628816  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:37:22.692539  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 19:37:22.742760  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:37:22.803320  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:37:22.824909  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:37:22.844559  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:37:22.864083  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:37:22.883037  467248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:37:22.903175  467248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:37:22.917466  467248 ssh_runner.go:195] Run: openssl version
	I1009 19:37:22.923985  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:37:22.933031  467248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:37:22.937819  467248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:37:22.937950  467248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:37:22.979535  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:37:22.988705  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:37:22.997937  467248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:37:23.002756  467248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:37:23.002904  467248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:37:23.050997  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:37:23.059681  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:37:23.069113  467248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:23.073488  467248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:23.073632  467248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:23.119129  467248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:37:23.127870  467248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:37:23.132422  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:37:23.174423  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:37:23.235524  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:37:23.375598  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:37:23.466701  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:37:23.557842  467248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
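(Editor's note) The `-checkend 86400` invocations above confirm that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. The same check can be done with the Go standard library; a hedged equivalent, with an illustrative file path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}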
	I1009 19:37:23.698162  467248 kubeadm.go:400] StartCluster: {Name:old-k8s-version-271815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-271815 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:37:23.698307  467248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:37:23.698410  467248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:37:23.790992  467248 cri.go:89] found id: "269bca5e10b8713a319b82f53713720fe83bba918a22af6746851588062867f7"
	I1009 19:37:23.791062  467248 cri.go:89] found id: "f7644ea5932c5e3b7208518f0c96c2d54106c86cb9d11050d55518f2ed9bac0d"
	I1009 19:37:23.791083  467248 cri.go:89] found id: "e3161747cdf0118272a93fab2eee3081718d8e3f73492036c9219649d6fbe93f"
	I1009 19:37:23.791117  467248 cri.go:89] found id: "5225c215f7ddf7ce082f0615c771a305c50067a07bcd3df6f45b03ea91f2b24c"
	I1009 19:37:23.791153  467248 cri.go:89] found id: ""
	I1009 19:37:23.791238  467248 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:37:23.809089  467248 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:37:23Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:37:23.809244  467248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:37:23.819707  467248 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:37:23.819765  467248 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:37:23.819844  467248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:37:23.838601  467248 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:37:23.839122  467248 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-271815" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:37:23.839288  467248 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-271815" cluster setting kubeconfig missing "old-k8s-version-271815" context setting]
	I1009 19:37:23.839663  467248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:23.841324  467248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:37:23.872049  467248 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 19:37:23.872129  467248 kubeadm.go:601] duration metric: took 52.33549ms to restartPrimaryControlPlane
	I1009 19:37:23.872154  467248 kubeadm.go:402] duration metric: took 174.001979ms to StartCluster
	I1009 19:37:23.872193  467248 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:23.872276  467248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:37:23.873008  467248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:23.873284  467248 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:37:23.873674  467248 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:37:23.873758  467248 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-271815"
	I1009 19:37:23.873770  467248 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-271815"
	W1009 19:37:23.873777  467248 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:37:23.873797  467248 host.go:66] Checking if "old-k8s-version-271815" exists ...
	I1009 19:37:23.874294  467248 config.go:182] Loaded profile config "old-k8s-version-271815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 19:37:23.874356  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:23.874376  467248 addons.go:69] Setting dashboard=true in profile "old-k8s-version-271815"
	I1009 19:37:23.874387  467248 addons.go:238] Setting addon dashboard=true in "old-k8s-version-271815"
	W1009 19:37:23.874393  467248 addons.go:247] addon dashboard should already be in state true
	I1009 19:37:23.874412  467248 host.go:66] Checking if "old-k8s-version-271815" exists ...
	I1009 19:37:23.874883  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:23.875125  467248 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-271815"
	I1009 19:37:23.875148  467248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-271815"
	I1009 19:37:23.875399  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:23.890500  467248 out.go:179] * Verifying Kubernetes components...
	I1009 19:37:23.898638  467248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:37:23.919819  467248 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-271815"
	W1009 19:37:23.919842  467248 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:37:23.919865  467248 host.go:66] Checking if "old-k8s-version-271815" exists ...
	I1009 19:37:23.920279  467248 cli_runner.go:164] Run: docker container inspect old-k8s-version-271815 --format={{.State.Status}}
	I1009 19:37:23.937306  467248 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:37:23.940213  467248 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:37:23.940258  467248 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:37:23.940282  467248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:37:23.940351  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:23.949169  467248 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:37:23.952297  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:37:23.952327  467248 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:37:23.952400  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:23.990006  467248 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:37:23.990027  467248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:37:23.990092  467248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-271815
	I1009 19:37:24.010574  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:24.030082  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:24.039394  467248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/old-k8s-version-271815/id_rsa Username:docker}
	I1009 19:37:24.273596  467248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:37:20.458116  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.247191413s)
	I1009 19:37:20.458159  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1009 19:37:20.458167  465613 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.247230798s)
	I1009 19:37:20.458178  465613 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1009 19:37:20.458204  465613 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 19:37:20.458226  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1009 19:37:20.458279  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 19:37:22.174925  465613 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.716622932s)
	I1009 19:37:22.174957  465613 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1009 19:37:22.174983  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1009 19:37:22.175057  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.71682094s)
	I1009 19:37:22.175071  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1009 19:37:22.175091  465613 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1009 19:37:22.175136  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1009 19:37:24.120444  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.945271589s)
	I1009 19:37:24.120472  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1009 19:37:24.120490  465613 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1009 19:37:24.120539  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1009 19:37:24.328918  467248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:37:24.385773  467248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:37:24.394501  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:37:24.394526  467248 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:37:24.583927  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:37:24.584002  467248 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:37:24.739374  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:37:24.739445  467248 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:37:24.788870  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:37:24.788929  467248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:37:24.844579  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:37:24.844652  467248 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:37:24.875399  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:37:24.875475  467248 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:37:24.917242  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:37:24.917318  467248 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:37:24.956335  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:37:24.956412  467248 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:37:24.993068  467248 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:37:24.993140  467248 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:37:25.032742  467248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
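(Editor's note) The dashboard addon is installed by copying each manifest to /etc/kubernetes/addons and then running the versioned kubectl once with one -f flag per file, as the command above shows. A small sketch of building that invocation; the helper name is illustrative and the file list is abbreviated:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs the cluster's own kubectl against its kubeconfig with
// one -f flag per addon manifest, mirroring the command in the log above.
func applyManifests(kubectl, kubeconfig string, files []string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml", // remaining manifests elided
	}
	if err := applyManifests("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig", files); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}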
	I1009 19:37:26.221503  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.100937939s)
	I1009 19:37:26.221532  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1009 19:37:26.221550  465613 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1009 19:37:26.221599  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1009 19:37:33.605534  467248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.331894193s)
	I1009 19:37:33.605579  467248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.276586847s)
	I1009 19:37:33.605589  467248 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.219742425s)
	I1009 19:37:33.605617  467248 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-271815" to be "Ready" ...
	I1009 19:37:33.675128  467248 node_ready.go:49] node "old-k8s-version-271815" is "Ready"
	I1009 19:37:33.675206  467248 node_ready.go:38] duration metric: took 69.560905ms for node "old-k8s-version-271815" to be "Ready" ...
	I1009 19:37:33.675233  467248 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:37:33.675317  467248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:37:34.778899  467248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.746066492s)
	I1009 19:37:34.779094  467248 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.10373927s)
	I1009 19:37:34.779113  467248 api_server.go:72] duration metric: took 10.905777758s to wait for apiserver process to appear ...
	I1009 19:37:34.779119  467248 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:37:34.779146  467248 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:37:34.780697  467248 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-271815 addons enable metrics-server
	
	I1009 19:37:34.781832  467248 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 19:37:31.295176  465613 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.073551616s)
	I1009 19:37:31.295200  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1009 19:37:31.295216  465613 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 19:37:31.295264  465613 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1009 19:37:32.247548  465613 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 19:37:32.247580  465613 cache_images.go:124] Successfully loaded all cached images
	I1009 19:37:32.247586  465613 cache_images.go:93] duration metric: took 17.987553221s to LoadCachedImages
	I1009 19:37:32.247598  465613 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 19:37:32.247683  465613 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-678119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:37:32.247756  465613 ssh_runner.go:195] Run: crio config
	I1009 19:37:32.336695  465613 cni.go:84] Creating CNI manager for ""
	I1009 19:37:32.336773  465613 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:37:32.336815  465613 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:37:32.336865  465613 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-678119 NodeName:no-preload-678119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:37:32.337024  465613 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-678119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
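(Editor's note) The kubeadm, kubelet, and kube-proxy configuration printed above is rendered from the cluster's parameters (node name, advertise address, CRI socket, Kubernetes version) and later written to /var/tmp/minikube/kubeadm.yaml.new. A rough sketch of that kind of templating with the standard library; the struct and template text are simplified stand-ins, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// clusterParams is a simplified stand-in for the values visible in the log.
type clusterParams struct {
	NodeName         string
	AdvertiseAddress string
	CRISocket        string
	K8sVersion       string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
`

func main() {
	p := clusterParams{
		NodeName:         "no-preload-678119",
		AdvertiseAddress: "192.168.76.2",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		K8sVersion:       "v1.34.1",
	}
	// Render to stdout; minikube instead writes the rendered file to the node
	// over SSH as /var/tmp/minikube/kubeadm.yaml.new.
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		os.Exit(1)
	}
}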
	
	I1009 19:37:32.337130  465613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:37:32.345919  465613 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1009 19:37:32.346037  465613 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1009 19:37:32.356268  465613 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1009 19:37:32.356454  465613 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1009 19:37:32.356856  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1009 19:37:32.356591  465613 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1009 19:37:32.362025  465613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1009 19:37:32.362060  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1009 19:37:33.537315  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1009 19:37:33.563720  465613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:33.572252  465613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1009 19:37:33.572285  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1009 19:37:33.671513  465613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1009 19:37:33.694805  465613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1009 19:37:33.694891  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
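(Editor's note) The download URLs above carry a `checksum=file:...sha256` hint: the binary is fetched together with its published SHA-256 and verified before being cached and copied to the node. A minimal, hedged sketch of that verification (the URLs come from the log, the helper name and destination path are illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url into destPath and checks it against the hex
// SHA-256 published at sumURL (first whitespace-separated field).
func fetchVerified(url, sumURL, destPath string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file at %s", sumURL)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the file is only trusted if the digests match.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
	if err := fetchVerified(base, base+".sha256", "/tmp/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}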
	I1009 19:37:34.544351  465613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:37:34.560187  465613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:37:34.575088  465613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:37:34.609240  465613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1009 19:37:34.622837  465613 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:37:34.631904  465613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
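(Editor's note) The bash one-liner above rewrites /etc/hosts so that exactly one `control-plane.minikube.internal` entry points at the node IP: grep out any stale line, append the new mapping, then copy the result back over /etc/hosts. The same idea in Go, as a sketch with illustrative names:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and appends
// "ip\thost", mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale control-plane entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}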
	I1009 19:37:34.644100  465613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:37:34.830738  465613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:37:34.848767  465613 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119 for IP: 192.168.76.2
	I1009 19:37:34.848832  465613 certs.go:195] generating shared ca certs ...
	I1009 19:37:34.848861  465613 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:34.849039  465613 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:37:34.849108  465613 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:37:34.849149  465613 certs.go:257] generating profile certs ...
	I1009 19:37:34.849239  465613 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key
	I1009 19:37:34.849271  465613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt with IP's: []
	I1009 19:37:35.295871  465613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt ...
	I1009 19:37:35.295897  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: {Name:mk71dd1c30258f0b4095df2035cb942a2d8d57c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:35.296096  465613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key ...
	I1009 19:37:35.296105  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key: {Name:mk721a7d11722f195a4be7c6b4dc0780379708f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:35.296183  465613 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7
	I1009 19:37:35.296199  465613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt.7093ead7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 19:37:36.056367  465613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt.7093ead7 ...
	I1009 19:37:36.056413  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt.7093ead7: {Name:mk88e7065aaa99e71eda962289cb921a85a5963a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:36.056605  465613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7 ...
	I1009 19:37:36.056621  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7: {Name:mk8221c4cea0ade3667b466c605564d8fef0e3da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:36.056707  465613 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt.7093ead7 -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt
	I1009 19:37:36.056784  465613 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7 -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key
	I1009 19:37:36.056845  465613 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key
	I1009 19:37:36.056869  465613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt with IP's: []
	I1009 19:37:36.484391  465613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt ...
	I1009 19:37:36.484460  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt: {Name:mka668a089b506fdf2b3e2713eefbbeb90139f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:36.484683  465613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key ...
	I1009 19:37:36.484699  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key: {Name:mk7efbe62dd124414300523e0c1dfb790f5ad6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:37:36.484888  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:37:36.484932  465613 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:37:36.484946  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:37:36.484969  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:37:36.484994  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:37:36.485026  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:37:36.485072  465613 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:37:36.485627  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:37:36.507029  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:37:36.526403  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:37:36.544917  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:37:36.564012  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:37:36.582277  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:37:36.600372  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:37:36.619057  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:37:36.637529  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:37:36.655737  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:37:36.673785  465613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:37:36.696704  465613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:37:36.710703  465613 ssh_runner.go:195] Run: openssl version
	I1009 19:37:36.718964  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:37:36.728317  465613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:37:36.732926  465613 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:37:36.733041  465613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:37:36.775949  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:37:36.785337  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:37:36.794749  465613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:37:36.799418  465613 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:37:36.799527  465613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:37:36.841232  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:37:36.851049  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:37:36.859593  465613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:36.863901  465613 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:36.863970  465613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:37:36.909222  465613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:37:36.917869  465613 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:37:36.921649  465613 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:37:36.921734  465613 kubeadm.go:400] StartCluster: {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:37:36.921818  465613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:37:36.921878  465613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:37:36.952578  465613 cri.go:89] found id: ""
	I1009 19:37:36.952659  465613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:37:36.962456  465613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:37:36.970886  465613 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:37:36.970999  465613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:37:36.978877  465613 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:37:36.978896  465613 kubeadm.go:157] found existing configuration files:
	
	I1009 19:37:36.978977  465613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:37:36.986614  465613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:37:36.986721  465613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:37:36.994152  465613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:37:37.008592  465613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:37:37.008675  465613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:37:37.017329  465613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:37:37.027373  465613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:37:37.027468  465613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:37:37.036193  465613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:37:37.044613  465613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:37:37.044703  465613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:37:37.052676  465613 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:37:37.096704  465613 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:37:37.096767  465613 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:37:37.127490  465613 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:37:37.127654  465613 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:37:37.127732  465613 kubeadm.go:318] OS: Linux
	I1009 19:37:37.127808  465613 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:37:37.127886  465613 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:37:37.127970  465613 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:37:37.128085  465613 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:37:37.128167  465613 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:37:37.128286  465613 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:37:37.128348  465613 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:37:37.128416  465613 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:37:37.128472  465613 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:37:37.204197  465613 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:37:37.204315  465613 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:37:37.204422  465613 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:37:37.230626  465613 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:37:34.783191  467248 addons.go:514] duration metric: took 10.909505053s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 19:37:34.790193  467248 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:37:34.791743  467248 api_server.go:141] control plane version: v1.28.0
	I1009 19:37:34.791806  467248 api_server.go:131] duration metric: took 12.680353ms to wait for apiserver health ...
	I1009 19:37:34.791829  467248 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:37:34.797777  467248 system_pods.go:59] 8 kube-system pods found
	I1009 19:37:34.797807  467248 system_pods.go:61] "coredns-5dd5756b68-ftv2x" [dc6318da-ce5f-4d30-9999-62b2f083b2da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:37:34.797818  467248 system_pods.go:61] "etcd-old-k8s-version-271815" [410e417a-3808-4b54-81f4-ca4e6dc04b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:37:34.797824  467248 system_pods.go:61] "kindnet-t5pvl" [6cb7e417-e089-4f17-b9d6-9eb1ad6d968e] Running
	I1009 19:37:34.797832  467248 system_pods.go:61] "kube-apiserver-old-k8s-version-271815" [d358823b-82e8-4e81-8ba0-4f01275fe48d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:37:34.797839  467248 system_pods.go:61] "kube-controller-manager-old-k8s-version-271815" [d2e23f8b-cffe-4f84-ab0d-5324794b460d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:37:34.797845  467248 system_pods.go:61] "kube-proxy-7j6jw" [f8087fcd-ccb5-438a-8b76-034287b3cd28] Running
	I1009 19:37:34.797853  467248 system_pods.go:61] "kube-scheduler-old-k8s-version-271815" [166c7cee-c35b-48a2-a799-9157538ac799] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:37:34.797857  467248 system_pods.go:61] "storage-provisioner" [f5406654-bb8c-49c3-a7a4-e3a13517e0e2] Running
	I1009 19:37:34.797862  467248 system_pods.go:74] duration metric: took 6.015204ms to wait for pod list to return data ...
	I1009 19:37:34.797869  467248 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:37:34.801526  467248 default_sa.go:45] found service account: "default"
	I1009 19:37:34.801600  467248 default_sa.go:55] duration metric: took 3.724349ms for default service account to be created ...
	I1009 19:37:34.801625  467248 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:37:34.805978  467248 system_pods.go:86] 8 kube-system pods found
	I1009 19:37:34.806059  467248 system_pods.go:89] "coredns-5dd5756b68-ftv2x" [dc6318da-ce5f-4d30-9999-62b2f083b2da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:37:34.806084  467248 system_pods.go:89] "etcd-old-k8s-version-271815" [410e417a-3808-4b54-81f4-ca4e6dc04b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:37:34.806107  467248 system_pods.go:89] "kindnet-t5pvl" [6cb7e417-e089-4f17-b9d6-9eb1ad6d968e] Running
	I1009 19:37:34.806189  467248 system_pods.go:89] "kube-apiserver-old-k8s-version-271815" [d358823b-82e8-4e81-8ba0-4f01275fe48d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:37:34.806218  467248 system_pods.go:89] "kube-controller-manager-old-k8s-version-271815" [d2e23f8b-cffe-4f84-ab0d-5324794b460d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:37:34.806236  467248 system_pods.go:89] "kube-proxy-7j6jw" [f8087fcd-ccb5-438a-8b76-034287b3cd28] Running
	I1009 19:37:34.806272  467248 system_pods.go:89] "kube-scheduler-old-k8s-version-271815" [166c7cee-c35b-48a2-a799-9157538ac799] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:37:34.806300  467248 system_pods.go:89] "storage-provisioner" [f5406654-bb8c-49c3-a7a4-e3a13517e0e2] Running
	I1009 19:37:34.806326  467248 system_pods.go:126] duration metric: took 4.67946ms to wait for k8s-apps to be running ...
	I1009 19:37:34.806359  467248 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:37:34.806447  467248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:37:34.828166  467248 system_svc.go:56] duration metric: took 21.798394ms WaitForService to wait for kubelet
	I1009 19:37:34.828245  467248 kubeadm.go:586] duration metric: took 10.954907653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:37:34.828298  467248 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:37:34.832608  467248 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:37:34.832682  467248 node_conditions.go:123] node cpu capacity is 2
	I1009 19:37:34.832721  467248 node_conditions.go:105] duration metric: took 4.398399ms to run NodePressure ...
	I1009 19:37:34.832752  467248 start.go:241] waiting for startup goroutines ...
	I1009 19:37:34.832775  467248 start.go:246] waiting for cluster config update ...
	I1009 19:37:34.832813  467248 start.go:255] writing updated cluster config ...
	I1009 19:37:34.833150  467248 ssh_runner.go:195] Run: rm -f paused
	I1009 19:37:34.838210  467248 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:37:34.843541  467248 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-ftv2x" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:37:36.850748  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:37.232924  465613 out.go:252]   - Generating certificates and keys ...
	I1009 19:37:37.233025  465613 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:37:37.233113  465613 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:37:37.409623  465613 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:37:37.648129  465613 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:37:38.528993  465613 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:37:39.419929  465613 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1009 19:37:39.352696  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:41.850910  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:43.864583  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:40.183541  465613 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:37:40.183841  465613 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-678119] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:37:40.784626  465613 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:37:40.784942  465613 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-678119] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:37:41.182394  465613 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:37:41.553039  465613 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:37:42.008236  465613 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:37:42.009018  465613 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:37:42.742573  465613 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:37:43.559364  465613 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:37:44.569344  465613 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:37:44.918478  465613 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:37:46.001308  465613 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:37:46.001450  465613 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:37:46.004971  465613 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1009 19:37:46.350025  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:48.354369  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:46.006290  465613 out.go:252]   - Booting up control plane ...
	I1009 19:37:46.006423  465613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:37:46.006510  465613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:37:46.008330  465613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:37:46.034712  465613 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:37:46.035428  465613 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:37:46.054602  465613 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:37:46.054726  465613 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:37:46.054771  465613 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:37:46.225016  465613 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:37:46.225152  465613 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:37:47.725913  465613 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500901627s
	I1009 19:37:47.729609  465613 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:37:47.729709  465613 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 19:37:47.729802  465613 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:37:47.729888  465613 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1009 19:37:50.851734  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:52.852089  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:56.603242  465613 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 8.868746401s
	I1009 19:37:56.969029  465613 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.239430012s
	I1009 19:37:58.731892  465613 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.001977221s
	I1009 19:37:58.757918  465613 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:37:58.775113  465613 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:37:58.789916  465613 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:37:58.790158  465613 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-678119 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:37:58.804732  465613 kubeadm.go:318] [bootstrap-token] Using token: bja34r.6kzea7cmbq4vjgav
	W1009 19:37:54.852635  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:37:57.350188  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:37:58.808933  465613 out.go:252]   - Configuring RBAC rules ...
	I1009 19:37:58.809059  465613 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:37:58.813502  465613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:37:58.824340  465613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:37:58.829527  465613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:37:58.836667  465613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:37:58.840835  465613 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:37:59.140158  465613 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:37:59.584462  465613 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:38:00.161335  465613 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:38:00.161366  465613 kubeadm.go:318] 
	I1009 19:38:00.161430  465613 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:38:00.161436  465613 kubeadm.go:318] 
	I1009 19:38:00.161534  465613 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:38:00.161541  465613 kubeadm.go:318] 
	I1009 19:38:00.161568  465613 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:38:00.161630  465613 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:38:00.161685  465613 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:38:00.161690  465613 kubeadm.go:318] 
	I1009 19:38:00.161747  465613 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:38:00.161759  465613 kubeadm.go:318] 
	I1009 19:38:00.161820  465613 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:38:00.161826  465613 kubeadm.go:318] 
	I1009 19:38:00.161880  465613 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:38:00.161960  465613 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:38:00.162032  465613 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:38:00.162037  465613 kubeadm.go:318] 
	I1009 19:38:00.162144  465613 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:38:00.162228  465613 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:38:00.162233  465613 kubeadm.go:318] 
	I1009 19:38:00.162331  465613 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token bja34r.6kzea7cmbq4vjgav \
	I1009 19:38:00.162440  465613 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:38:00.163250  465613 kubeadm.go:318] 	--control-plane 
	I1009 19:38:00.163271  465613 kubeadm.go:318] 
	I1009 19:38:00.163364  465613 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:38:00.163383  465613 kubeadm.go:318] 
	I1009 19:38:00.163477  465613 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token bja34r.6kzea7cmbq4vjgav \
	I1009 19:38:00.163737  465613 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 19:38:00.203019  465613 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:38:00.205463  465613 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:38:00.205615  465613 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:38:00.205659  465613 cni.go:84] Creating CNI manager for ""
	I1009 19:38:00.205673  465613 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:38:00.214910  465613 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:38:00.225216  465613 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:38:00.267484  465613 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:38:00.267506  465613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:38:00.303312  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:38:00.738348  465613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:38:00.738498  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:00.738566  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-678119 minikube.k8s.io/updated_at=2025_10_09T19_38_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=no-preload-678119 minikube.k8s.io/primary=true
	I1009 19:38:00.901197  465613 ops.go:34] apiserver oom_adj: -16
	I1009 19:38:00.901278  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:01.401627  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:01.902397  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:02.402227  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:02.901534  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:03.401401  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:03.902243  465613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:38:03.998875  465613 kubeadm.go:1113] duration metric: took 3.260428988s to wait for elevateKubeSystemPrivileges
	I1009 19:38:03.998908  465613 kubeadm.go:402] duration metric: took 27.077205286s to StartCluster
	I1009 19:38:03.998935  465613 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:03.999056  465613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:04.000579  465613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:04.001073  465613 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:04.001703  465613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:38:04.002910  465613 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:04.003064  465613 addons.go:69] Setting storage-provisioner=true in profile "no-preload-678119"
	I1009 19:38:04.003083  465613 addons.go:238] Setting addon storage-provisioner=true in "no-preload-678119"
	I1009 19:38:04.003140  465613 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:04.003367  465613 addons.go:69] Setting default-storageclass=true in profile "no-preload-678119"
	I1009 19:38:04.003394  465613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-678119"
	I1009 19:38:04.003704  465613 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:04.003869  465613 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:04.005133  465613 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:04.006390  465613 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:04.010094  465613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:04.059665  465613 addons.go:238] Setting addon default-storageclass=true in "no-preload-678119"
	I1009 19:38:04.059725  465613 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:04.060484  465613 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:04.061807  465613 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1009 19:37:59.850393  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:38:02.349266  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:38:04.064871  465613 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:04.064896  465613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:04.064960  465613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:04.088072  465613 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:04.088104  465613 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:04.088182  465613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:04.121473  465613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:04.132076  465613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:04.293233  465613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:38:04.382534  465613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:04.398737  465613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:04.429985  465613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:05.122496  465613 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1009 19:38:05.124516  465613 node_ready.go:35] waiting up to 6m0s for node "no-preload-678119" to be "Ready" ...
	I1009 19:38:05.423206  465613 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1009 19:38:04.351648  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:38:06.354556  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	W1009 19:38:08.849808  467248 pod_ready.go:104] pod "coredns-5dd5756b68-ftv2x" is not "Ready", error: <nil>
	I1009 19:38:05.426040  465613 addons.go:514] duration metric: took 1.423124141s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 19:38:05.630202  465613 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-678119" context rescaled to 1 replicas
	W1009 19:38:07.130391  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	W1009 19:38:09.629049  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	I1009 19:38:11.349634  467248 pod_ready.go:94] pod "coredns-5dd5756b68-ftv2x" is "Ready"
	I1009 19:38:11.349667  467248 pod_ready.go:86] duration metric: took 36.506055517s for pod "coredns-5dd5756b68-ftv2x" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.353029  467248 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.358424  467248 pod_ready.go:94] pod "etcd-old-k8s-version-271815" is "Ready"
	I1009 19:38:11.358451  467248 pod_ready.go:86] duration metric: took 5.400857ms for pod "etcd-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.361447  467248 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.366435  467248 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-271815" is "Ready"
	I1009 19:38:11.366464  467248 pod_ready.go:86] duration metric: took 4.995486ms for pod "kube-apiserver-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.369493  467248 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.547782  467248 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-271815" is "Ready"
	I1009 19:38:11.547856  467248 pod_ready.go:86] duration metric: took 178.325949ms for pod "kube-controller-manager-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:11.748044  467248 pod_ready.go:83] waiting for pod "kube-proxy-7j6jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:12.147538  467248 pod_ready.go:94] pod "kube-proxy-7j6jw" is "Ready"
	I1009 19:38:12.147565  467248 pod_ready.go:86] duration metric: took 399.493667ms for pod "kube-proxy-7j6jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:12.347608  467248 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:12.747106  467248 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-271815" is "Ready"
	I1009 19:38:12.747136  467248 pod_ready.go:86] duration metric: took 399.500233ms for pod "kube-scheduler-old-k8s-version-271815" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:12.747148  467248 pod_ready.go:40] duration metric: took 37.908857911s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:38:12.808149  467248 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1009 19:38:12.811360  467248 out.go:203] 
	W1009 19:38:12.814340  467248 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1009 19:38:12.817264  467248 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1009 19:38:12.820197  467248 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-271815" cluster and "default" namespace by default
	W1009 19:38:12.128866  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	W1009 19:38:14.129491  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	W1009 19:38:16.629348  465613 node_ready.go:57] node "no-preload-678119" has "Ready":"False" status (will retry)
	I1009 19:38:18.632326  465613 node_ready.go:49] node "no-preload-678119" is "Ready"
	I1009 19:38:18.632352  465613 node_ready.go:38] duration metric: took 13.506496399s for node "no-preload-678119" to be "Ready" ...
	I1009 19:38:18.632365  465613 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:38:18.632425  465613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:38:18.650307  465613 api_server.go:72] duration metric: took 14.649195447s to wait for apiserver process to appear ...
	I1009 19:38:18.650334  465613 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:38:18.650354  465613 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:38:18.662701  465613 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 19:38:18.663700  465613 api_server.go:141] control plane version: v1.34.1
	I1009 19:38:18.663722  465613 api_server.go:131] duration metric: took 13.380679ms to wait for apiserver health ...
	I1009 19:38:18.663731  465613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:38:18.674828  465613 system_pods.go:59] 8 kube-system pods found
	I1009 19:38:18.674861  465613 system_pods.go:61] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending
	I1009 19:38:18.674867  465613 system_pods.go:61] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:18.674872  465613 system_pods.go:61] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:18.674876  465613 system_pods.go:61] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:18.674922  465613 system_pods.go:61] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:18.674934  465613 system_pods.go:61] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:18.674939  465613 system_pods.go:61] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:18.674950  465613 system_pods.go:61] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending
	I1009 19:38:18.674959  465613 system_pods.go:74] duration metric: took 11.221632ms to wait for pod list to return data ...
	I1009 19:38:18.674992  465613 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:38:18.680465  465613 default_sa.go:45] found service account: "default"
	I1009 19:38:18.680489  465613 default_sa.go:55] duration metric: took 5.489547ms for default service account to be created ...
	I1009 19:38:18.680499  465613 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:38:18.692260  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:18.692288  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending
	I1009 19:38:18.692311  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:18.692317  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:18.692322  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:18.692326  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:18.692330  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:18.692335  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:18.692351  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:38:18.692373  465613 retry.go:31] will retry after 216.368352ms: missing components: kube-dns
	I1009 19:38:18.913178  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:18.913213  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:38:18.913219  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:18.913226  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:18.913232  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:18.913237  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:18.913241  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:18.913251  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:18.913261  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:38:18.913279  465613 retry.go:31] will retry after 384.003219ms: missing components: kube-dns
	I1009 19:38:19.301508  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:19.301552  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:38:19.301559  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:19.301566  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:19.301603  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:19.301609  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:19.301613  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:19.301617  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:19.301622  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:38:19.301637  465613 retry.go:31] will retry after 296.341327ms: missing components: kube-dns
	I1009 19:38:19.602357  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:19.602395  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:38:19.602403  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:19.602409  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:19.602413  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:19.602422  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:19.602428  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:19.602432  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:19.602439  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:38:19.602459  465613 retry.go:31] will retry after 583.646415ms: missing components: kube-dns
	I1009 19:38:20.189921  465613 system_pods.go:86] 8 kube-system pods found
	I1009 19:38:20.189955  465613 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Running
	I1009 19:38:20.189962  465613 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running
	I1009 19:38:20.189967  465613 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:38:20.189971  465613 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running
	I1009 19:38:20.189977  465613 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running
	I1009 19:38:20.189980  465613 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:38:20.189984  465613 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running
	I1009 19:38:20.189989  465613 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Running
	I1009 19:38:20.189996  465613 system_pods.go:126] duration metric: took 1.509491123s to wait for k8s-apps to be running ...
	I1009 19:38:20.190009  465613 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:38:20.190080  465613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:38:20.204066  465613 system_svc.go:56] duration metric: took 14.047297ms WaitForService to wait for kubelet
	I1009 19:38:20.204094  465613 kubeadm.go:586] duration metric: took 16.202988581s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:20.204114  465613 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:38:20.206896  465613 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:38:20.206925  465613 node_conditions.go:123] node cpu capacity is 2
	I1009 19:38:20.206937  465613 node_conditions.go:105] duration metric: took 2.817197ms to run NodePressure ...
	I1009 19:38:20.206950  465613 start.go:241] waiting for startup goroutines ...
	I1009 19:38:20.206957  465613 start.go:246] waiting for cluster config update ...
	I1009 19:38:20.206968  465613 start.go:255] writing updated cluster config ...
	I1009 19:38:20.207265  465613 ssh_runner.go:195] Run: rm -f paused
	I1009 19:38:20.211300  465613 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:38:20.215419  465613 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.220732  465613 pod_ready.go:94] pod "coredns-66bc5c9577-cfmf8" is "Ready"
	I1009 19:38:20.220768  465613 pod_ready.go:86] duration metric: took 5.315752ms for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.223245  465613 pod_ready.go:83] waiting for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.228259  465613 pod_ready.go:94] pod "etcd-no-preload-678119" is "Ready"
	I1009 19:38:20.228287  465613 pod_ready.go:86] duration metric: took 5.014399ms for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.234674  465613 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.241658  465613 pod_ready.go:94] pod "kube-apiserver-no-preload-678119" is "Ready"
	I1009 19:38:20.241715  465613 pod_ready.go:86] duration metric: took 7.011582ms for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.244541  465613 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.615834  465613 pod_ready.go:94] pod "kube-controller-manager-no-preload-678119" is "Ready"
	I1009 19:38:20.615863  465613 pod_ready.go:86] duration metric: took 371.295393ms for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:20.815962  465613 pod_ready.go:83] waiting for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:21.216026  465613 pod_ready.go:94] pod "kube-proxy-cf6gt" is "Ready"
	I1009 19:38:21.216052  465613 pod_ready.go:86] duration metric: took 400.060706ms for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:21.415362  465613 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:21.815994  465613 pod_ready.go:94] pod "kube-scheduler-no-preload-678119" is "Ready"
	I1009 19:38:21.816027  465613 pod_ready.go:86] duration metric: took 400.63664ms for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:38:21.816053  465613 pod_ready.go:40] duration metric: took 1.604707728s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:38:21.882501  465613 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:38:21.890436  465613 out.go:179] * Done! kubectl is now configured to use "no-preload-678119" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.948555666Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.952487943Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.952521806Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.952544633Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.955644065Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.955676697Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.955700057Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.96024433Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.960279932Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.960304105Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.963619128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:38:11 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:11.963807233Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.796251802Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=756512cf-31de-4c54-91b5-9617ad50c1f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.797868936Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=47b5af2b-182c-4f27-b170-fa32baacfb5a name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.79915732Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls/dashboard-metrics-scraper" id=071cc4f3-71f3-4f96-88a9-511e9bda1aa1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.799435042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.806414731Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.806948185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.827620729Z" level=info msg="Created container 6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls/dashboard-metrics-scraper" id=071cc4f3-71f3-4f96-88a9-511e9bda1aa1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.829726229Z" level=info msg="Starting container: 6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be" id=15b0c75e-6f1a-4836-bf07-66bc84917252 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:38:18 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:18.831787389Z" level=info msg="Started container" PID=1712 containerID=6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls/dashboard-metrics-scraper id=15b0c75e-6f1a-4836-bf07-66bc84917252 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b7d07859200a3186af6679ae61a49953c7a1aeaecbd8ba0e759c6e0898341d3
	Oct 09 19:38:18 old-k8s-version-271815 conmon[1710]: conmon 6cfc8e3ac66b23ced830 <ninfo>: container 1712 exited with status 1
	Oct 09 19:38:19 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:19.187318771Z" level=info msg="Removing container: b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3" id=2f3a2912-87b9-4e02-a710-856a41a57219 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:38:19 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:19.195851396Z" level=info msg="Error loading conmon cgroup of container b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3: cgroup deleted" id=2f3a2912-87b9-4e02-a710-856a41a57219 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:38:19 old-k8s-version-271815 crio[650]: time="2025-10-09T19:38:19.19962245Z" level=info msg="Removed container b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls/dashboard-metrics-scraper" id=2f3a2912-87b9-4e02-a710-856a41a57219 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	6cfc8e3ac66b2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   2                   5b7d07859200a       dashboard-metrics-scraper-5f989dc9cf-9vwls       kubernetes-dashboard
	e3f12c8476c7f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   c90803e5fe33c       storage-provisioner                              kube-system
	2d63418e6ce2f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   d48535f523137       kubernetes-dashboard-8694d4445c-h9ccf            kubernetes-dashboard
	10744f141a4b0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           58 seconds ago       Running             coredns                     1                   eb8686b392d69       coredns-5dd5756b68-ftv2x                         kube-system
	a229c68dff091       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   c90803e5fe33c       storage-provisioner                              kube-system
	58a8b3ce524be       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   745cf6f9e82b2       busybox                                          default
	bac73e4710084       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   f6d98f0435604       kindnet-t5pvl                                    kube-system
	ab150203b138e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           58 seconds ago       Running             kube-proxy                  1                   d9125e6c54642       kube-proxy-7j6jw                                 kube-system
	269bca5e10b87       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   f263a1a241982       kube-controller-manager-old-k8s-version-271815   kube-system
	f7644ea5932c5       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   4501fdeb4d781       kube-apiserver-old-k8s-version-271815            kube-system
	e3161747cdf01       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   f7bbeb46f8786       etcd-old-k8s-version-271815                      kube-system
	5225c215f7ddf       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   103ff185f1acb       kube-scheduler-old-k8s-version-271815            kube-system
	
	
	==> coredns [10744f141a4b0dfd34d28c3a32335c0845b684257b88d74b758a7fe58035975e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34620 - 4392 "HINFO IN 4559633695322455595.7631379097162551978. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021807288s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-271815
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-271815
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=old-k8s-version-271815
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_36_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:36:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-271815
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:38:01 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:38:01 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:38:01 +0000   Thu, 09 Oct 2025 19:36:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:38:01 +0000   Thu, 09 Oct 2025 19:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-271815
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5aa5c1fd2e642859d5aa3878c95a1e2
	  System UUID:                1963e6d2-e326-4444-bd99-5534a70044a9
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-ftv2x                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     119s
	  kube-system                 etcd-old-k8s-version-271815                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m12s
	  kube-system                 kindnet-t5pvl                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-old-k8s-version-271815             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-old-k8s-version-271815    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-7j6jw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-old-k8s-version-271815             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-9vwls        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-h9ccf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 118s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m13s              kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s              kubelet          Node old-k8s-version-271815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s              kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m                 node-controller  Node old-k8s-version-271815 event: Registered Node old-k8s-version-271815 in Controller
	  Normal  NodeReady                105s               kubelet          Node old-k8s-version-271815 status is now: NodeReady
	  Normal  Starting                 67s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node old-k8s-version-271815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node old-k8s-version-271815 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node old-k8s-version-271815 event: Registered Node old-k8s-version-271815 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e3161747cdf0118272a93fab2eee3081718d8e3f73492036c9219649d6fbe93f] <==
	{"level":"info","ts":"2025-10-09T19:37:23.860143Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-09T19:37:23.860235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T19:37:23.860261Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T19:37:23.906497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:37:23.906562Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:37:23.906571Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:37:23.938999Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-09T19:37:23.941543Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T19:37:23.941631Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T19:37:23.943049Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-09T19:37:23.943141Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-09T19:37:25.36217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-09T19:37:25.362283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-09T19:37:25.362324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-09T19:37:25.362371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-09T19:37:25.362407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-09T19:37:25.36245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-09T19:37:25.362482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-09T19:37:25.368506Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-271815 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-09T19:37:25.368604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T19:37:25.369593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-09T19:37:25.373688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T19:37:25.374778Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-09T19:37:25.390527Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-09T19:37:25.390607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:38:30 up  2:21,  0 user,  load average: 4.39, 2.82, 2.22
	Linux old-k8s-version-271815 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bac73e47100848955f3f3f4f9b77a47feeb98dc2c3bf4b8a567178090f45a220] <==
	I1009 19:37:31.714508       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:37:31.722288       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:37:31.722538       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:37:31.722585       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:37:31.722623       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:37:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:37:31.939245       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:37:31.939315       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:37:31.939348       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:37:31.943126       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:38:01.940899       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:38:01.943469       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:38:01.943673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:38:01.943751       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 19:38:03.543433       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:38:03.543463       1 metrics.go:72] Registering metrics
	I1009 19:38:03.543537       1 controller.go:711] "Syncing nftables rules"
	I1009 19:38:11.939686       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:38:11.940838       1 main.go:301] handling current node
	I1009 19:38:21.939680       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:38:21.939777       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f7644ea5932c5e3b7208518f0c96c2d54106c86cb9d11050d55518f2ed9bac0d] <==
	I1009 19:37:30.468312       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1009 19:37:30.471532       1 aggregator.go:166] initial CRD sync complete...
	I1009 19:37:30.471557       1 autoregister_controller.go:141] Starting autoregister controller
	I1009 19:37:30.471564       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:37:30.471571       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:37:30.485884       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1009 19:37:30.485928       1 shared_informer.go:318] Caches are synced for configmaps
	I1009 19:37:30.512956       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1009 19:37:30.543172       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:37:30.952433       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:37:34.425757       1 controller.go:624] quota admission added evaluator for: namespaces
	I1009 19:37:34.548576       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1009 19:37:34.620054       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:37:34.639488       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:37:34.657775       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1009 19:37:34.756057       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.137.177"}
	I1009 19:37:34.771314       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.234.247"}
	E1009 19:37:40.462434       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I1009 19:37:43.353286       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:37:43.368291       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1009 19:37:43.390918       1 controller.go:624] quota admission added evaluator for: endpoints
	E1009 19:37:50.463087       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1009 19:38:00.463936       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1009 19:38:10.464568       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1009 19:38:20.465254       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [269bca5e10b8713a319b82f53713720fe83bba918a22af6746851588062867f7] <==
	I1009 19:37:43.399052       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1009 19:37:43.423946       1 shared_informer.go:318] Caches are synced for resource quota
	I1009 19:37:43.439787       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-9vwls"
	I1009 19:37:43.440310       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-h9ccf"
	I1009 19:37:43.450283       1 shared_informer.go:318] Caches are synced for resource quota
	I1009 19:37:43.473970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.270917ms"
	I1009 19:37:43.474428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="87.315337ms"
	I1009 19:37:43.545410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.312813ms"
	I1009 19:37:43.560300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.794598ms"
	I1009 19:37:43.560903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.215µs"
	I1009 19:37:43.581312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.785199ms"
	I1009 19:37:43.581500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.579µs"
	I1009 19:37:43.585041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.534µs"
	I1009 19:37:43.798303       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 19:37:43.798350       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1009 19:37:43.807586       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 19:37:52.113290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.041078ms"
	I1009 19:37:52.113367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.83µs"
	I1009 19:37:59.119850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.849µs"
	I1009 19:38:00.252571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.025µs"
	I1009 19:38:01.148266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.027µs"
	I1009 19:38:10.984511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.900909ms"
	I1009 19:38:10.984773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.22µs"
	I1009 19:38:19.208763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.835µs"
	I1009 19:38:23.801965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.768µs"
	
	
	==> kube-proxy [ab150203b138e1f09f9d40149a76bfda555618d4861f5b1ecee8f410e751492a] <==
	I1009 19:37:32.349682       1 server_others.go:69] "Using iptables proxy"
	I1009 19:37:32.663658       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1009 19:37:33.176701       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:37:33.181232       1 server_others.go:152] "Using iptables Proxier"
	I1009 19:37:33.181343       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1009 19:37:33.181377       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1009 19:37:33.181434       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 19:37:33.181772       1 server.go:846] "Version info" version="v1.28.0"
	I1009 19:37:33.182028       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:37:33.183378       1 config.go:188] "Starting service config controller"
	I1009 19:37:33.183460       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 19:37:33.183504       1 config.go:97] "Starting endpoint slice config controller"
	I1009 19:37:33.183530       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 19:37:33.184231       1 config.go:315] "Starting node config controller"
	I1009 19:37:33.184527       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 19:37:33.285851       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1009 19:37:33.285908       1 shared_informer.go:318] Caches are synced for service config
	I1009 19:37:33.286255       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5225c215f7ddf7ce082f0615c771a305c50067a07bcd3df6f45b03ea91f2b24c] <==
	I1009 19:37:28.591420       1 serving.go:348] Generated self-signed cert in-memory
	I1009 19:37:31.882853       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1009 19:37:31.882963       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:37:31.905231       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1009 19:37:31.905520       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1009 19:37:31.905542       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1009 19:37:31.905565       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1009 19:37:31.926778       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:37:31.926810       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 19:37:31.926838       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:37:31.926844       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1009 19:37:32.010414       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1009 19:37:32.027732       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1009 19:37:32.027808       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: I1009 19:37:43.623864     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdnkv\" (UniqueName: \"kubernetes.io/projected/005602fb-94aa-46b2-94ef-5bb2d79d974f-kube-api-access-mdnkv\") pod \"kubernetes-dashboard-8694d4445c-h9ccf\" (UID: \"005602fb-94aa-46b2-94ef-5bb2d79d974f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-h9ccf"
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: I1009 19:37:43.623982     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9rl\" (UniqueName: \"kubernetes.io/projected/9ef4de73-8be0-4e7a-b14b-b0000e7a60b8-kube-api-access-tb9rl\") pod \"dashboard-metrics-scraper-5f989dc9cf-9vwls\" (UID: \"9ef4de73-8be0-4e7a-b14b-b0000e7a60b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls"
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: I1009 19:37:43.624096     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9ef4de73-8be0-4e7a-b14b-b0000e7a60b8-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-9vwls\" (UID: \"9ef4de73-8be0-4e7a-b14b-b0000e7a60b8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls"
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: W1009 19:37:43.879778     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/crio-d48535f523137a09df2204ae0a17bc675b5442adfd77deb236694c7237bff103 WatchSource:0}: Error finding container d48535f523137a09df2204ae0a17bc675b5442adfd77deb236694c7237bff103: Status 404 returned error can't find the container with id d48535f523137a09df2204ae0a17bc675b5442adfd77deb236694c7237bff103
	Oct 09 19:37:43 old-k8s-version-271815 kubelet[779]: W1009 19:37:43.914415     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/395bb50f3c39a99685bc855adbedc7b9bf05e4f6acfbf802583b4a1a0c26e980/crio-5b7d07859200a3186af6679ae61a49953c7a1aeaecbd8ba0e759c6e0898341d3 WatchSource:0}: Error finding container 5b7d07859200a3186af6679ae61a49953c7a1aeaecbd8ba0e759c6e0898341d3: Status 404 returned error can't find the container with id 5b7d07859200a3186af6679ae61a49953c7a1aeaecbd8ba0e759c6e0898341d3
	Oct 09 19:37:52 old-k8s-version-271815 kubelet[779]: I1009 19:37:52.087775     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-h9ccf" podStartSLOduration=1.833283271 podCreationTimestamp="2025-10-09 19:37:43 +0000 UTC" firstStartedPulling="2025-10-09 19:37:43.886757991 +0000 UTC m=+21.352174372" lastFinishedPulling="2025-10-09 19:37:51.140444997 +0000 UTC m=+28.605861378" observedRunningTime="2025-10-09 19:37:52.086553952 +0000 UTC m=+29.551970341" watchObservedRunningTime="2025-10-09 19:37:52.086970277 +0000 UTC m=+29.552386666"
	Oct 09 19:37:59 old-k8s-version-271815 kubelet[779]: I1009 19:37:59.090536     779 scope.go:117] "RemoveContainer" containerID="839f52f92a9c3fa928a49e920789f702d9f46086b7f1cae1d9258b0a95f36a9c"
	Oct 09 19:38:00 old-k8s-version-271815 kubelet[779]: I1009 19:38:00.129721     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:00 old-k8s-version-271815 kubelet[779]: I1009 19:38:00.130905     779 scope.go:117] "RemoveContainer" containerID="839f52f92a9c3fa928a49e920789f702d9f46086b7f1cae1d9258b0a95f36a9c"
	Oct 09 19:38:00 old-k8s-version-271815 kubelet[779]: E1009 19:38:00.135162     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:01 old-k8s-version-271815 kubelet[779]: I1009 19:38:01.133454     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:01 old-k8s-version-271815 kubelet[779]: E1009 19:38:01.133753     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:02 old-k8s-version-271815 kubelet[779]: I1009 19:38:02.137612     779 scope.go:117] "RemoveContainer" containerID="a229c68dff09136aa0c245ad345e4da9e4d2661e0f36d75399c3187ca7dad029"
	Oct 09 19:38:03 old-k8s-version-271815 kubelet[779]: I1009 19:38:03.783186     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:03 old-k8s-version-271815 kubelet[779]: E1009 19:38:03.783637     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:18 old-k8s-version-271815 kubelet[779]: I1009 19:38:18.795241     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:19 old-k8s-version-271815 kubelet[779]: I1009 19:38:19.182841     779 scope.go:117] "RemoveContainer" containerID="b8433afa2f22e7eaee94c42394b4fc92267c83a64dd912cb96d0b166046ab5a3"
	Oct 09 19:38:19 old-k8s-version-271815 kubelet[779]: I1009 19:38:19.183152     779 scope.go:117] "RemoveContainer" containerID="6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be"
	Oct 09 19:38:19 old-k8s-version-271815 kubelet[779]: E1009 19:38:19.183467     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:23 old-k8s-version-271815 kubelet[779]: I1009 19:38:23.783195     779 scope.go:117] "RemoveContainer" containerID="6cfc8e3ac66b23ced83032e3c49defb1dcb543e53fe4d600ecfb1087c1bc54be"
	Oct 09 19:38:23 old-k8s-version-271815 kubelet[779]: E1009 19:38:23.784000     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9vwls_kubernetes-dashboard(9ef4de73-8be0-4e7a-b14b-b0000e7a60b8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9vwls" podUID="9ef4de73-8be0-4e7a-b14b-b0000e7a60b8"
	Oct 09 19:38:25 old-k8s-version-271815 kubelet[779]: I1009 19:38:25.054215     779 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 09 19:38:25 old-k8s-version-271815 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:38:25 old-k8s-version-271815 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:38:25 old-k8s-version-271815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2d63418e6ce2f77eca69e74ff9ea7e78acc2dc61f5289982a96c1ac9c78d7392] <==
	2025/10/09 19:37:51 Starting overwatch
	2025/10/09 19:37:51 Using namespace: kubernetes-dashboard
	2025/10/09 19:37:51 Using in-cluster config to connect to apiserver
	2025/10/09 19:37:51 Using secret token for csrf signing
	2025/10/09 19:37:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 19:37:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 19:37:51 Successful initial request to the apiserver, version: v1.28.0
	2025/10/09 19:37:51 Generating JWE encryption key
	2025/10/09 19:37:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 19:37:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 19:37:51 Initializing JWE encryption key from synchronized object
	2025/10/09 19:37:51 Creating in-cluster Sidecar client
	2025/10/09 19:37:51 Serving insecurely on HTTP port: 9090
	2025/10/09 19:37:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:38:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a229c68dff09136aa0c245ad345e4da9e4d2661e0f36d75399c3187ca7dad029] <==
	I1009 19:37:31.906205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:38:02.010986       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e3f12c8476c7f675ccdaedbeec3d39577c01787f3e043afaf02faad8eef8a730] <==
	I1009 19:38:02.207604       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:38:02.230310       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:38:02.230371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 19:38:19.627051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:38:19.627741       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-271815_b7b4dc8e-2340-4252-a763-ca31f3ea6d7a!
	I1009 19:38:19.627566       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89b8e788-298a-4512-8566-f2088b6d05b0", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-271815_b7b4dc8e-2340-4252-a763-ca31f3ea6d7a became leader
	I1009 19:38:19.728655       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-271815_b7b4dc8e-2340-4252-a763-ca31f3ea6d7a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-271815 -n old-k8s-version-271815
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-271815 -n old-k8s-version-271815: exit status 2 (381.974841ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-271815 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (296.430546ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:38:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-678119 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-678119 describe deploy/metrics-server -n kube-system: exit status 1 (118.525162ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-678119 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-678119
helpers_test.go:243: (dbg) docker inspect no-preload-678119:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198",
	        "Created": "2025-10-09T19:37:06.160258648Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 465922,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:37:06.260735054Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/hosts",
	        "LogPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198-json.log",
	        "Name": "/no-preload-678119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-678119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-678119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198",
	                "LowerDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-678119",
	                "Source": "/var/lib/docker/volumes/no-preload-678119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-678119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-678119",
	                "name.minikube.sigs.k8s.io": "no-preload-678119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5fb91d587896c698951fe667061335268e23132e2f1d58c14ac5203d79db709d",
	            "SandboxKey": "/var/run/docker/netns/5fb91d587896",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-678119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:ad:6b:da:73:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5323b5d2b808ea7e86b28785565321bc6d429621f1f5c630eb2a054cf03b7389",
	                    "EndpointID": "c69f16c1dd7da1a36707f6217362ff06571523c27c8b2eef36a72e70537155fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-678119",
	                        "2e3aac5c1c11"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-678119 -n no-preload-678119
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-678119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-678119 logs -n 25: (1.575766867s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-224541 sudo crio config                                                                                                                                                                                                             │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ delete  │ -p cilium-224541                                                                                                                                                                                                                              │ cilium-224541             │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ start   │ -p force-systemd-env-028248 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ ssh     │ force-systemd-flag-476949 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-flag-476949                                                                                                                                                                                                                  │ force-systemd-flag-476949 │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-028248                                                                                                                                                                                                                   │ force-systemd-env-028248  │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ cert-options-983220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ -p cert-options-983220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ delete  │ -p cert-options-983220                                                                                                                                                                                                                        │ cert-options-983220       │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │ 09 Oct 25 19:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-271815 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ delete  │ -p cert-expiration-259172                                                                                                                                                                                                                     │ cert-expiration-259172    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119         │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-678119         │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815    │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:38:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:38:34.303072  472674 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:34.303295  472674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:34.303307  472674 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:34.303313  472674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:34.303587  472674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:38:34.304035  472674 out.go:368] Setting JSON to false
	I1009 19:38:34.305013  472674 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8466,"bootTime":1760030249,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:38:34.305086  472674 start.go:141] virtualization:  
	I1009 19:38:34.308992  472674 out.go:179] * [embed-certs-779570] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:38:34.312320  472674 notify.go:220] Checking for updates...
	I1009 19:38:34.312925  472674 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:38:34.316086  472674 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:34.318837  472674 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:34.321841  472674 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:38:34.324924  472674 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:38:34.327841  472674 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Oct 09 19:38:19 no-preload-678119 crio[836]: time="2025-10-09T19:38:19.077775765Z" level=info msg="Created container fdd28c1c0da85cdb4021ef4f9378e22261a1cbb79d691463e8b3fb52b737d041: kube-system/coredns-66bc5c9577-cfmf8/coredns" id=0e414152-cbcb-4543-b489-1b910bfa5b8e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:19 no-preload-678119 crio[836]: time="2025-10-09T19:38:19.079026249Z" level=info msg="Starting container: fdd28c1c0da85cdb4021ef4f9378e22261a1cbb79d691463e8b3fb52b737d041" id=018bc29c-b861-488b-bf4d-be88858f5170 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:38:19 no-preload-678119 crio[836]: time="2025-10-09T19:38:19.081452672Z" level=info msg="Started container" PID=2504 containerID=fdd28c1c0da85cdb4021ef4f9378e22261a1cbb79d691463e8b3fb52b737d041 description=kube-system/coredns-66bc5c9577-cfmf8/coredns id=018bc29c-b861-488b-bf4d-be88858f5170 name=/runtime.v1.RuntimeService/StartContainer sandboxID=27c3f8dd1926c4c71bf398f9411829afdb62bd9f0c9bd80e4a3ddd9977d6eb4a
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.954663579Z" level=info msg="Running pod sandbox: default/busybox/POD" id=15411ac9-ce1b-445c-adda-d6fcedb29838 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.954730459Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.962517599Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a1f6a7c2e049bad4b63af9ab50f0213fe85cec5975c5fb5803925d08e1ab8f73 UID:5cf9de21-70e1-4070-8c67-80a49ebe678c NetNS:/var/run/netns/57b1e50d-c0a6-4bc3-a2ab-cf4950cee520 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40015b6858}] Aliases:map[]}"
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.96255728Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.971530853Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a1f6a7c2e049bad4b63af9ab50f0213fe85cec5975c5fb5803925d08e1ab8f73 UID:5cf9de21-70e1-4070-8c67-80a49ebe678c NetNS:/var/run/netns/57b1e50d-c0a6-4bc3-a2ab-cf4950cee520 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40015b6858}] Aliases:map[]}"
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.971681173Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.978104695Z" level=info msg="Ran pod sandbox a1f6a7c2e049bad4b63af9ab50f0213fe85cec5975c5fb5803925d08e1ab8f73 with infra container: default/busybox/POD" id=15411ac9-ce1b-445c-adda-d6fcedb29838 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.979290703Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0d34aa90-e301-42fa-b502-11acb6deba36 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.97940734Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0d34aa90-e301-42fa-b502-11acb6deba36 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.979445863Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0d34aa90-e301-42fa-b502-11acb6deba36 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.981667812Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8ee0383c-b176-4b3a-85a7-3748adb4e9b4 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:38:23 no-preload-678119 crio[836]: time="2025-10-09T19:38:23.98382517Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.894493136Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=8ee0383c-b176-4b3a-85a7-3748adb4e9b4 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.895135809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c93a152-15fb-4e04-b845-0ab3fc877128 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.898227389Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5cc5ada2-e969-46d3-a11b-a6584d074528 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.906710167Z" level=info msg="Creating container: default/busybox/busybox" id=c505dd05-5b17-411e-b3d0-ad4b84c7c939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.907494775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.912033501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.912547402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.926979762Z" level=info msg="Created container e9b9723c7b1156ab56cd8507a7e266f813c1d337d6d42384c1f0359741b5ca53: default/busybox/busybox" id=c505dd05-5b17-411e-b3d0-ad4b84c7c939 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.928307752Z" level=info msg="Starting container: e9b9723c7b1156ab56cd8507a7e266f813c1d337d6d42384c1f0359741b5ca53" id=0d43a62e-73af-4253-bdd7-db69675ed83c name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:38:25 no-preload-678119 crio[836]: time="2025-10-09T19:38:25.93031438Z" level=info msg="Started container" PID=2554 containerID=e9b9723c7b1156ab56cd8507a7e266f813c1d337d6d42384c1f0359741b5ca53 description=default/busybox/busybox id=0d43a62e-73af-4253-bdd7-db69675ed83c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1f6a7c2e049bad4b63af9ab50f0213fe85cec5975c5fb5803925d08e1ab8f73
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e9b9723c7b115       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   a1f6a7c2e049b       busybox                                     default
	fdd28c1c0da85       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago      Running             coredns                   0                   27c3f8dd1926c       coredns-66bc5c9577-cfmf8                    kube-system
	53bd81b08220a       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      16 seconds ago      Running             storage-provisioner       0                   d7a89434b87c3       storage-provisioner                         kube-system
	a7a6227516c6f       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    27 seconds ago      Running             kindnet-cni               0                   877f5c0900d2f       kindnet-rg6kc                               kube-system
	49c4906d0d513       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   5f113e7ebfd40       kube-proxy-cf6gt                            kube-system
	a3d796d044dfe       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      46 seconds ago      Running             kube-scheduler            0                   bd86a56435b57       kube-scheduler-no-preload-678119            kube-system
	7e62d856b3f6e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      47 seconds ago      Running             etcd                      0                   be3fd3132649b       etcd-no-preload-678119                      kube-system
	82804bb6bd52d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      47 seconds ago      Running             kube-controller-manager   0                   967805c3a523f       kube-controller-manager-no-preload-678119   kube-system
	3e7d91d75dbb2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      47 seconds ago      Running             kube-apiserver            0                   ce50466504c62       kube-apiserver-no-preload-678119            kube-system
	
	
	==> coredns [fdd28c1c0da85cdb4021ef4f9378e22261a1cbb79d691463e8b3fb52b737d041] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42199 - 47526 "HINFO IN 496296163013958397.8723925094742907382. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013617967s
	
	
	==> describe nodes <==
	Name:               no-preload-678119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-678119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=no-preload-678119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_38_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-678119
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:38:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:38:30 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:38:30 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:38:30 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:38:30 +0000   Thu, 09 Oct 2025 19:38:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-678119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 af7d5924a05745599121f711e82d7a14
	  System UUID:                b33fed70-8b70-482e-bac9-78dc101bc1cd
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-cfmf8                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-678119                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-rg6kc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-678119             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-678119    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-cf6gt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-678119             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node no-preload-678119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node no-preload-678119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node no-preload-678119 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-678119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-678119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-678119 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-678119 event: Registered Node no-preload-678119 in Controller
	  Normal   NodeReady                17s                kubelet          Node no-preload-678119 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 19:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7e62d856b3f6ed374b6244c93ad0c6d1e534c3dd27b7f4da1c1665a9ecd456e3] <==
	{"level":"warn","ts":"2025-10-09T19:37:52.827138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:52.866020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:52.907187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:52.942585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.007024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.023417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.067174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.142673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.192992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.193109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.229153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.283346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.330830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.374262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.423579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.493265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.499605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.559768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.579826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.628473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.689602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.764809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.817817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:53.827988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:37:54.020034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49576","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:38:35 up  2:21,  0 user,  load average: 4.28, 2.83, 2.23
	Linux no-preload-678119 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a7a6227516c6f090c171bc8ce7711469998b79917d7ec5bd8048264b6440f5be] <==
	I1009 19:38:07.919708       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:38:07.920039       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 19:38:07.920178       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:38:07.920189       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:38:07.920203       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:38:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:38:08.122290       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:38:08.122333       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:38:08.122342       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:38:08.214535       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1009 19:38:08.416044       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:38:08.416137       1 metrics.go:72] Registering metrics
	I1009 19:38:08.416226       1 controller.go:711] "Syncing nftables rules"
	I1009 19:38:18.128182       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:38:18.128316       1 main.go:301] handling current node
	I1009 19:38:28.122745       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:38:28.122789       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e7d91d75dbb26515583cedfe003267be7c6f509bf9dc6072240f7daf9b598b7] <==
	E1009 19:37:56.603942       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1009 19:37:56.612397       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:37:56.621394       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 19:37:56.667439       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:37:56.675710       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:37:56.675736       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 19:37:56.750035       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:37:56.829549       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 19:37:56.890478       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 19:37:56.890506       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:37:58.284989       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:37:58.393634       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:37:58.494184       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:37:58.503040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1009 19:37:58.504359       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:37:58.510211       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:37:58.630757       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:37:59.547499       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:37:59.573373       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:37:59.595917       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 19:38:04.287284       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:38:04.480393       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1009 19:38:04.690232       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:38:04.757786       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1009 19:38:33.322794       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:51616: use of closed network connection
	
	
	==> kube-controller-manager [82804bb6bd52d0da930f42f6220b63ffcbad65ce1b7c49b473772a0244f50f20] <==
	I1009 19:38:03.650747       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:38:03.650782       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:38:03.652273       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:38:03.652904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:38:03.664185       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:38:03.668606       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:38:03.668722       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:38:03.668756       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:38:03.668786       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:38:03.668876       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 19:38:03.669739       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:38:03.669882       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 19:38:03.671037       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:38:03.671140       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:38:03.673121       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:38:03.671223       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:38:03.675377       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:38:03.675716       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:38:03.671202       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 19:38:03.676112       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:38:03.676608       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-678119"
	I1009 19:38:03.676685       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 19:38:03.687365       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:38:03.696928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:38:18.679875       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [49c4906d0d513776306e1068806261d8ef4d59affdc629cb77334988e7f976b4] <==
	I1009 19:38:05.656455       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:38:05.915320       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:38:06.020915       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:38:06.020961       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 19:38:06.021058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:38:06.121368       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:38:06.121439       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:38:06.148213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:38:06.166285       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:38:06.166333       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:38:06.169107       1 config.go:200] "Starting service config controller"
	I1009 19:38:06.169122       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:38:06.169485       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:38:06.169495       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:38:06.169509       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:38:06.169513       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:38:06.174623       1 config.go:309] "Starting node config controller"
	I1009 19:38:06.174644       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:38:06.174654       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:38:06.270197       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:38:06.270456       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:38:06.270477       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a3d796d044dfe5d721a37c6801288996fece2c0bf3ceb102774b99f6b665e2e1] <==
	E1009 19:37:56.939820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:37:56.939895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:37:56.939940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:37:56.940012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:37:56.940086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 19:37:56.940143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 19:37:56.940180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:37:56.940219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:37:56.940268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:37:56.940366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 19:37:56.945763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:37:56.946296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:37:56.946401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:37:56.946518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 19:37:56.955825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:37:56.956126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 19:37:56.956193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:37:57.776776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:37:57.803337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 19:37:57.850518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:37:57.892846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 19:37:57.894385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:37:57.907845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:37:57.934680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1009 19:38:00.122881       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:38:04 no-preload-678119 kubelet[2028]: I1009 19:38:04.667159    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9ae95d69-1114-460b-a01e-1863c278cf3c-cni-cfg\") pod \"kindnet-rg6kc\" (UID: \"9ae95d69-1114-460b-a01e-1863c278cf3c\") " pod="kube-system/kindnet-rg6kc"
	Oct 09 19:38:04 no-preload-678119 kubelet[2028]: I1009 19:38:04.667201    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ae95d69-1114-460b-a01e-1863c278cf3c-xtables-lock\") pod \"kindnet-rg6kc\" (UID: \"9ae95d69-1114-460b-a01e-1863c278cf3c\") " pod="kube-system/kindnet-rg6kc"
	Oct 09 19:38:04 no-preload-678119 kubelet[2028]: I1009 19:38:04.667229    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ae95d69-1114-460b-a01e-1863c278cf3c-lib-modules\") pod \"kindnet-rg6kc\" (UID: \"9ae95d69-1114-460b-a01e-1863c278cf3c\") " pod="kube-system/kindnet-rg6kc"
	Oct 09 19:38:04 no-preload-678119 kubelet[2028]: I1009 19:38:04.667254    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gllm5\" (UniqueName: \"kubernetes.io/projected/9ae95d69-1114-460b-a01e-1863c278cf3c-kube-api-access-gllm5\") pod \"kindnet-rg6kc\" (UID: \"9ae95d69-1114-460b-a01e-1863c278cf3c\") " pod="kube-system/kindnet-rg6kc"
	Oct 09 19:38:04 no-preload-678119 kubelet[2028]: I1009 19:38:04.667308    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0bafa31-2149-4367-9807-708bd7b12e76-kube-proxy\") pod \"kube-proxy-cf6gt\" (UID: \"f0bafa31-2149-4367-9807-708bd7b12e76\") " pod="kube-system/kube-proxy-cf6gt"
	Oct 09 19:38:04 no-preload-678119 kubelet[2028]: I1009 19:38:04.667325    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0bafa31-2149-4367-9807-708bd7b12e76-xtables-lock\") pod \"kube-proxy-cf6gt\" (UID: \"f0bafa31-2149-4367-9807-708bd7b12e76\") " pod="kube-system/kube-proxy-cf6gt"
	Oct 09 19:38:04 no-preload-678119 kubelet[2028]: I1009 19:38:04.857331    2028 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:38:05 no-preload-678119 kubelet[2028]: W1009 19:38:05.177033    2028 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/crio-5f113e7ebfd40531e33c75512d5b30fbad5c4e04053f46802c5eb6cdb67cb166 WatchSource:0}: Error finding container 5f113e7ebfd40531e33c75512d5b30fbad5c4e04053f46802c5eb6cdb67cb166: Status 404 returned error can't find the container with id 5f113e7ebfd40531e33c75512d5b30fbad5c4e04053f46802c5eb6cdb67cb166
	Oct 09 19:38:05 no-preload-678119 kubelet[2028]: I1009 19:38:05.702895    2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cf6gt" podStartSLOduration=1.702878297 podStartE2EDuration="1.702878297s" podCreationTimestamp="2025-10-09 19:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:38:05.7022793 +0000 UTC m=+6.341724261" watchObservedRunningTime="2025-10-09 19:38:05.702878297 +0000 UTC m=+6.342323258"
	Oct 09 19:38:10 no-preload-678119 kubelet[2028]: I1009 19:38:10.234557    2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rg6kc" podStartSLOduration=3.363297415 podStartE2EDuration="6.234500434s" podCreationTimestamp="2025-10-09 19:38:04 +0000 UTC" firstStartedPulling="2025-10-09 19:38:04.912692094 +0000 UTC m=+5.552137047" lastFinishedPulling="2025-10-09 19:38:07.783895113 +0000 UTC m=+8.423340066" observedRunningTime="2025-10-09 19:38:08.685729491 +0000 UTC m=+9.325174477" watchObservedRunningTime="2025-10-09 19:38:10.234500434 +0000 UTC m=+10.873945387"
	Oct 09 19:38:18 no-preload-678119 kubelet[2028]: I1009 19:38:18.608003    2028 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 09 19:38:18 no-preload-678119 kubelet[2028]: I1009 19:38:18.780633    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wspf2\" (UniqueName: \"kubernetes.io/projected/6a7f4651-d02b-4b66-a8cb-12a333967e17-kube-api-access-wspf2\") pod \"storage-provisioner\" (UID: \"6a7f4651-d02b-4b66-a8cb-12a333967e17\") " pod="kube-system/storage-provisioner"
	Oct 09 19:38:18 no-preload-678119 kubelet[2028]: I1009 19:38:18.780689    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54b7f29f-4a97-4b36-8523-cada8e102815-config-volume\") pod \"coredns-66bc5c9577-cfmf8\" (UID: \"54b7f29f-4a97-4b36-8523-cada8e102815\") " pod="kube-system/coredns-66bc5c9577-cfmf8"
	Oct 09 19:38:18 no-preload-678119 kubelet[2028]: I1009 19:38:18.780715    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6a7f4651-d02b-4b66-a8cb-12a333967e17-tmp\") pod \"storage-provisioner\" (UID: \"6a7f4651-d02b-4b66-a8cb-12a333967e17\") " pod="kube-system/storage-provisioner"
	Oct 09 19:38:18 no-preload-678119 kubelet[2028]: I1009 19:38:18.780737    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtvg9\" (UniqueName: \"kubernetes.io/projected/54b7f29f-4a97-4b36-8523-cada8e102815-kube-api-access-wtvg9\") pod \"coredns-66bc5c9577-cfmf8\" (UID: \"54b7f29f-4a97-4b36-8523-cada8e102815\") " pod="kube-system/coredns-66bc5c9577-cfmf8"
	Oct 09 19:38:19 no-preload-678119 kubelet[2028]: W1009 19:38:19.010604    2028 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/crio-27c3f8dd1926c4c71bf398f9411829afdb62bd9f0c9bd80e4a3ddd9977d6eb4a WatchSource:0}: Error finding container 27c3f8dd1926c4c71bf398f9411829afdb62bd9f0c9bd80e4a3ddd9977d6eb4a: Status 404 returned error can't find the container with id 27c3f8dd1926c4c71bf398f9411829afdb62bd9f0c9bd80e4a3ddd9977d6eb4a
	Oct 09 19:38:19 no-preload-678119 kubelet[2028]: I1009 19:38:19.715252    2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.715230969 podStartE2EDuration="14.715230969s" podCreationTimestamp="2025-10-09 19:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:38:19.714475112 +0000 UTC m=+20.353920073" watchObservedRunningTime="2025-10-09 19:38:19.715230969 +0000 UTC m=+20.354676012"
	Oct 09 19:38:22 no-preload-678119 kubelet[2028]: I1009 19:38:22.143512    2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cfmf8" podStartSLOduration=18.143494586 podStartE2EDuration="18.143494586s" podCreationTimestamp="2025-10-09 19:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:38:19.739988514 +0000 UTC m=+20.379433516" watchObservedRunningTime="2025-10-09 19:38:22.143494586 +0000 UTC m=+22.782939539"
	Oct 09 19:38:22 no-preload-678119 kubelet[2028]: E1009 19:38:22.152866    2028 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-678119\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-678119' and this object" logger="UnhandledError" reflector="object-\"default\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 09 19:38:22 no-preload-678119 kubelet[2028]: E1009 19:38:22.152981    2028 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox\" is forbidden: User \"system:node:no-preload-678119\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-678119' and this object" podUID="5cf9de21-70e1-4070-8c67-80a49ebe678c" pod="default/busybox"
	Oct 09 19:38:22 no-preload-678119 kubelet[2028]: I1009 19:38:22.205680    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwck4\" (UniqueName: \"kubernetes.io/projected/5cf9de21-70e1-4070-8c67-80a49ebe678c-kube-api-access-kwck4\") pod \"busybox\" (UID: \"5cf9de21-70e1-4070-8c67-80a49ebe678c\") " pod="default/busybox"
	Oct 09 19:38:23 no-preload-678119 kubelet[2028]: E1009 19:38:23.316017    2028 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:38:23 no-preload-678119 kubelet[2028]: E1009 19:38:23.316067    2028 projected.go:196] Error preparing data for projected volume kube-api-access-kwck4 for pod default/busybox: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:38:23 no-preload-678119 kubelet[2028]: E1009 19:38:23.316156    2028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5cf9de21-70e1-4070-8c67-80a49ebe678c-kube-api-access-kwck4 podName:5cf9de21-70e1-4070-8c67-80a49ebe678c nodeName:}" failed. No retries permitted until 2025-10-09 19:38:23.816131953 +0000 UTC m=+24.455576906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kwck4" (UniqueName: "kubernetes.io/projected/5cf9de21-70e1-4070-8c67-80a49ebe678c-kube-api-access-kwck4") pod "busybox" (UID: "5cf9de21-70e1-4070-8c67-80a49ebe678c") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:38:23 no-preload-678119 kubelet[2028]: W1009 19:38:23.976909    2028 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/crio-a1f6a7c2e049bad4b63af9ab50f0213fe85cec5975c5fb5803925d08e1ab8f73 WatchSource:0}: Error finding container a1f6a7c2e049bad4b63af9ab50f0213fe85cec5975c5fb5803925d08e1ab8f73: Status 404 returned error can't find the container with id a1f6a7c2e049bad4b63af9ab50f0213fe85cec5975c5fb5803925d08e1ab8f73
	
	
	==> storage-provisioner [53bd81b08220a549a67d3054b374075ad200a753c286b0ea8b0588c546a0228e] <==
	I1009 19:38:19.049414       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 19:38:19.052141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:19.064429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:38:19.064614       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:38:19.064804       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-678119_be52c289-5d95-4066-91fe-0bbf3ab487de!
	I1009 19:38:19.075074       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56611be6-d733-49ae-861a-2846139fa527", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-678119_be52c289-5d95-4066-91fe-0bbf3ab487de became leader
	W1009 19:38:19.076517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:19.091262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:38:19.167059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-678119_be52c289-5d95-4066-91fe-0bbf3ab487de!
	W1009 19:38:21.094641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:21.100596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:23.103888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:23.112723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:25.116443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:25.123054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:27.127014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:27.132778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:29.136490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:29.144472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:31.149028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:31.158941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:33.162928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:33.180641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:35.190243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:38:35.201386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-678119 -n no-preload-678119
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-678119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.07s)
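Note on the storage-provisioner warnings above: the provisioner still takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which is what triggers the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" messages as the lease is renewed. A minimal sketch for inspecting that lock and comparing it with the Lease objects newer controllers use, assuming kubectl access to the same context:

	kubectl --context no-preload-678119 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context no-preload-678119 -n kube-system get leases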

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (8.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-678119 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-678119 --alsologtostderr -v=1: exit status 80 (2.541621127s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-678119 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:40:02.677701  478075 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:40:02.677902  478075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:02.677933  478075 out.go:374] Setting ErrFile to fd 2...
	I1009 19:40:02.677952  478075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:02.678266  478075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:40:02.678574  478075 out.go:368] Setting JSON to false
	I1009 19:40:02.678627  478075 mustload.go:65] Loading cluster: no-preload-678119
	I1009 19:40:02.679172  478075 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:02.679771  478075 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:40:02.703858  478075 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:40:02.704209  478075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:40:02.775141  478075 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:40:02.763847926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:40:02.776133  478075 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-678119 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 19:40:02.779637  478075 out.go:179] * Pausing node no-preload-678119 ... 
	I1009 19:40:02.783379  478075 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:40:02.783729  478075 ssh_runner.go:195] Run: systemctl --version
	I1009 19:40:02.783783  478075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:40:02.801189  478075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:40:02.904881  478075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:40:02.918043  478075 pause.go:52] kubelet running: true
	I1009 19:40:02.918207  478075 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:40:03.206192  478075 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:40:03.206310  478075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:40:03.274397  478075 cri.go:89] found id: "4110f531550ee3bba9d39633377e98dc4d9b90cbabe48ed631473d7cc3c34d1f"
	I1009 19:40:03.274424  478075 cri.go:89] found id: "e5baf5832a360386e095d5e41a8324cfa2d11a9c88e8f2319bfc1252311ac7b4"
	I1009 19:40:03.274442  478075 cri.go:89] found id: "b244ecd4999c0ff9c2715a575dc3c0a4f78e1e61014e5148fa3a221fcd2c5d67"
	I1009 19:40:03.274446  478075 cri.go:89] found id: "5ec0cc7ddf9a258bdb958420cd6e2751c0d81f215b0fa96445c52c0fcc7c6d19"
	I1009 19:40:03.274449  478075 cri.go:89] found id: "3d1a1090c5752c6b20ae312cdcd4f613e1bd218e9761dacc7b73be9f83733513"
	I1009 19:40:03.274453  478075 cri.go:89] found id: "776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3"
	I1009 19:40:03.274456  478075 cri.go:89] found id: "dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b"
	I1009 19:40:03.274459  478075 cri.go:89] found id: "3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6"
	I1009 19:40:03.274462  478075 cri.go:89] found id: "e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e"
	I1009 19:40:03.274469  478075 cri.go:89] found id: "96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521"
	I1009 19:40:03.274472  478075 cri.go:89] found id: "596a04caf4eb777f4ae839f82d27926841fdbc20c77d098530cd6c5998be36fc"
	I1009 19:40:03.274475  478075 cri.go:89] found id: ""
	I1009 19:40:03.274523  478075 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:40:03.297648  478075 retry.go:31] will retry after 257.622626ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:40:03Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:40:03.556220  478075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:40:03.570573  478075 pause.go:52] kubelet running: false
	I1009 19:40:03.570639  478075 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:40:03.749862  478075 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:40:03.749941  478075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:40:03.828130  478075 cri.go:89] found id: "4110f531550ee3bba9d39633377e98dc4d9b90cbabe48ed631473d7cc3c34d1f"
	I1009 19:40:03.828155  478075 cri.go:89] found id: "e5baf5832a360386e095d5e41a8324cfa2d11a9c88e8f2319bfc1252311ac7b4"
	I1009 19:40:03.828160  478075 cri.go:89] found id: "b244ecd4999c0ff9c2715a575dc3c0a4f78e1e61014e5148fa3a221fcd2c5d67"
	I1009 19:40:03.828165  478075 cri.go:89] found id: "5ec0cc7ddf9a258bdb958420cd6e2751c0d81f215b0fa96445c52c0fcc7c6d19"
	I1009 19:40:03.828168  478075 cri.go:89] found id: "3d1a1090c5752c6b20ae312cdcd4f613e1bd218e9761dacc7b73be9f83733513"
	I1009 19:40:03.828171  478075 cri.go:89] found id: "776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3"
	I1009 19:40:03.828174  478075 cri.go:89] found id: "dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b"
	I1009 19:40:03.828177  478075 cri.go:89] found id: "3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6"
	I1009 19:40:03.828180  478075 cri.go:89] found id: "e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e"
	I1009 19:40:03.828186  478075 cri.go:89] found id: "96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521"
	I1009 19:40:03.828189  478075 cri.go:89] found id: "596a04caf4eb777f4ae839f82d27926841fdbc20c77d098530cd6c5998be36fc"
	I1009 19:40:03.828192  478075 cri.go:89] found id: ""
	I1009 19:40:03.828241  478075 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:40:03.839122  478075 retry.go:31] will retry after 390.588278ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:40:03Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:40:04.230786  478075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:40:04.245221  478075 pause.go:52] kubelet running: false
	I1009 19:40:04.245346  478075 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:40:04.418827  478075 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:40:04.418917  478075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:40:04.494188  478075 cri.go:89] found id: "4110f531550ee3bba9d39633377e98dc4d9b90cbabe48ed631473d7cc3c34d1f"
	I1009 19:40:04.494215  478075 cri.go:89] found id: "e5baf5832a360386e095d5e41a8324cfa2d11a9c88e8f2319bfc1252311ac7b4"
	I1009 19:40:04.494220  478075 cri.go:89] found id: "b244ecd4999c0ff9c2715a575dc3c0a4f78e1e61014e5148fa3a221fcd2c5d67"
	I1009 19:40:04.494224  478075 cri.go:89] found id: "5ec0cc7ddf9a258bdb958420cd6e2751c0d81f215b0fa96445c52c0fcc7c6d19"
	I1009 19:40:04.494227  478075 cri.go:89] found id: "3d1a1090c5752c6b20ae312cdcd4f613e1bd218e9761dacc7b73be9f83733513"
	I1009 19:40:04.494230  478075 cri.go:89] found id: "776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3"
	I1009 19:40:04.494233  478075 cri.go:89] found id: "dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b"
	I1009 19:40:04.494236  478075 cri.go:89] found id: "3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6"
	I1009 19:40:04.494239  478075 cri.go:89] found id: "e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e"
	I1009 19:40:04.494265  478075 cri.go:89] found id: "96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521"
	I1009 19:40:04.494282  478075 cri.go:89] found id: "596a04caf4eb777f4ae839f82d27926841fdbc20c77d098530cd6c5998be36fc"
	I1009 19:40:04.494286  478075 cri.go:89] found id: ""
	I1009 19:40:04.494349  478075 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:40:04.506158  478075 retry.go:31] will retry after 329.110174ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:40:04Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:40:04.835750  478075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:40:04.848983  478075 pause.go:52] kubelet running: false
	I1009 19:40:04.849076  478075 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:40:05.020300  478075 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:40:05.020409  478075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:40:05.113097  478075 cri.go:89] found id: "4110f531550ee3bba9d39633377e98dc4d9b90cbabe48ed631473d7cc3c34d1f"
	I1009 19:40:05.113175  478075 cri.go:89] found id: "e5baf5832a360386e095d5e41a8324cfa2d11a9c88e8f2319bfc1252311ac7b4"
	I1009 19:40:05.113198  478075 cri.go:89] found id: "b244ecd4999c0ff9c2715a575dc3c0a4f78e1e61014e5148fa3a221fcd2c5d67"
	I1009 19:40:05.113216  478075 cri.go:89] found id: "5ec0cc7ddf9a258bdb958420cd6e2751c0d81f215b0fa96445c52c0fcc7c6d19"
	I1009 19:40:05.113254  478075 cri.go:89] found id: "3d1a1090c5752c6b20ae312cdcd4f613e1bd218e9761dacc7b73be9f83733513"
	I1009 19:40:05.113280  478075 cri.go:89] found id: "776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3"
	I1009 19:40:05.113316  478075 cri.go:89] found id: "dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b"
	I1009 19:40:05.113352  478075 cri.go:89] found id: "3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6"
	I1009 19:40:05.113390  478075 cri.go:89] found id: "e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e"
	I1009 19:40:05.113424  478075 cri.go:89] found id: "96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521"
	I1009 19:40:05.113449  478075 cri.go:89] found id: "596a04caf4eb777f4ae839f82d27926841fdbc20c77d098530cd6c5998be36fc"
	I1009 19:40:05.113472  478075 cri.go:89] found id: ""
	I1009 19:40:05.113555  478075 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:40:05.129853  478075 out.go:203] 
	W1009 19:40:05.132737  478075 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:40:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:40:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:40:05.132762  478075 out.go:285] * 
	* 
	W1009 19:40:05.141020  478075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:40:05.145941  478075 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-678119 --alsologtostderr -v=1 failed: exit status 80
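The pause failure here reduces to `sudo runc list -f json` exiting 1 with "open /run/runc: no such file or directory": minikube's pause path expects runc state under /run/runc on the node, and on this image that directory is absent. A minimal diagnostic sketch, assuming shell access to the profile via `minikube ssh` and that the crio/crictl binaries behave as in current releases (the configured runtime and its state directories may differ):

	# which OCI runtime crio is configured with
	minikube ssh -p no-preload-678119 -- 'sudo crio config 2>/dev/null | grep -A4 "\[crio.runtime"'
	# which runtime state directories actually exist on the node
	minikube ssh -p no-preload-678119 -- 'sudo ls -d /run/runc /run/crun /run/crio 2>&1'
	# list running containers through the CRI instead of calling runc directly
	minikube ssh -p no-preload-678119 -- 'sudo crictl ps --state Running'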
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-678119
helpers_test.go:243: (dbg) docker inspect no-preload-678119:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198",
	        "Created": "2025-10-09T19:37:06.160258648Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475283,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:38:49.210284712Z",
	            "FinishedAt": "2025-10-09T19:38:48.135487852Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/hosts",
	        "LogPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198-json.log",
	        "Name": "/no-preload-678119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-678119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-678119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198",
	                "LowerDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-678119",
	                "Source": "/var/lib/docker/volumes/no-preload-678119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-678119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-678119",
	                "name.minikube.sigs.k8s.io": "no-preload-678119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bded70007e05a3f6d6785d76afa636b82c3589a2b908dd781acdbf2680fa772c",
	            "SandboxKey": "/var/run/docker/netns/bded70007e05",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-678119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:3f:65:8f:70:81",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5323b5d2b808ea7e86b28785565321bc6d429621f1f5c630eb2a054cf03b7389",
	                    "EndpointID": "0e2dc78d30912b69cc9507c134a03217db8e3039ec96216b5688ba86373acf05",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-678119",
	                        "2e3aac5c1c11"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-678119 -n no-preload-678119
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-678119 -n no-preload-678119: exit status 2 (353.138699ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
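The "exit status 2 (may be ok)" from `minikube status` is consistent with the pause attempt above having already disabled the kubelet ("kubelet running: false") while the host container itself stays Running. A quick way to see every component state at once, assuming the status flags of current minikube releases:

	out/minikube-linux-arm64 status -p no-preload-678119 --output json
	out/minikube-linux-arm64 status -p no-preload-678119 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'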
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-678119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-678119 logs -n 25: (1.373211277s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-259172   │ jenkins │ v1.37.0 │ 09 Oct 25 19:33 UTC │ 09 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-028248                                                                                                                                                                                                                   │ force-systemd-env-028248 │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ cert-options-983220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ -p cert-options-983220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ delete  │ -p cert-options-983220                                                                                                                                                                                                                        │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-259172   │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │ 09 Oct 25 19:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-271815 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ delete  │ -p cert-expiration-259172                                                                                                                                                                                                                     │ cert-expiration-259172   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ stop    │ -p no-preload-678119 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p no-preload-678119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:38:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:38:48.813211  475149 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:48.813414  475149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:48.813442  475149 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:48.813462  475149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:48.813744  475149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:38:48.814213  475149 out.go:368] Setting JSON to false
	I1009 19:38:48.815200  475149 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8480,"bootTime":1760030249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:38:48.815292  475149 start.go:141] virtualization:  
	I1009 19:38:48.818274  475149 out.go:179] * [no-preload-678119] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:38:48.822174  475149 notify.go:220] Checking for updates...
	I1009 19:38:48.825926  475149 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:38:48.828915  475149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:48.831842  475149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:48.834784  475149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:38:48.837573  475149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:38:48.840448  475149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:38:48.843897  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:48.844471  475149 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:38:48.874806  475149 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:38:48.874928  475149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:48.987721  475149 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:38:48.978239172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:38:48.987828  475149 docker.go:318] overlay module found
	I1009 19:38:48.991227  475149 out.go:179] * Using the docker driver based on existing profile
	I1009 19:38:48.994112  475149 start.go:305] selected driver: docker
	I1009 19:38:48.994277  475149 start.go:925] validating driver "docker" against &{Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:48.994394  475149 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:38:48.995048  475149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:49.091619  475149 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:38:49.077776228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:38:49.091980  475149 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:49.092010  475149 cni.go:84] Creating CNI manager for ""
	I1009 19:38:49.092070  475149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:38:49.092114  475149 start.go:349] cluster config:
	{Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:49.095401  475149 out.go:179] * Starting "no-preload-678119" primary control-plane node in "no-preload-678119" cluster
	I1009 19:38:49.098259  475149 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:38:49.101090  475149 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:38:49.103853  475149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:49.104001  475149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/config.json ...
	I1009 19:38:49.104325  475149 cache.go:107] acquiring lock: {Name:mkf75ee142286ad1bdc0e9c0aa3f48e64fafdbe8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104424  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 19:38:49.104438  475149 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 125.819µs
	I1009 19:38:49.104456  475149 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 19:38:49.104469  475149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:38:49.104663  475149 cache.go:107] acquiring lock: {Name:mk25f7c277db514655a4eee10ac8e6ce05f41968 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104735  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1009 19:38:49.104747  475149 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 90.06µs
	I1009 19:38:49.104755  475149 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1009 19:38:49.104767  475149 cache.go:107] acquiring lock: {Name:mkf23fc2fd145cfb44f93f7bd77348bc96e294c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104802  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1009 19:38:49.104812  475149 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 46.36µs
	I1009 19:38:49.104819  475149 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1009 19:38:49.104828  475149 cache.go:107] acquiring lock: {Name:mkf1b5cecee0ad7719ec268fb80d35042f8ea9ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104861  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1009 19:38:49.104870  475149 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.503µs
	I1009 19:38:49.104876  475149 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1009 19:38:49.104885  475149 cache.go:107] acquiring lock: {Name:mk6d2ee36782fdd52dfc3b1b6d6b824788680c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104911  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1009 19:38:49.104920  475149 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 36.333µs
	I1009 19:38:49.104927  475149 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1009 19:38:49.104937  475149 cache.go:107] acquiring lock: {Name:mkbd960140c8f1b68fbb8e3db795bee47fe958c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104967  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1009 19:38:49.104976  475149 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.796µs
	I1009 19:38:49.104989  475149 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1009 19:38:49.105002  475149 cache.go:107] acquiring lock: {Name:mkb6bcbed58f86de43d5846c736eec4c3f941cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.105034  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1009 19:38:49.105043  475149 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.593µs
	I1009 19:38:49.105105  475149 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1009 19:38:49.105128  475149 cache.go:107] acquiring lock: {Name:mkae0e70582a2b9e175be8a94ecf46f19839bead Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.105176  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1009 19:38:49.105187  475149 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 61.49µs
	I1009 19:38:49.105204  475149 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1009 19:38:49.105211  475149 cache.go:87] Successfully saved all images to host disk.
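
The cache.go lines above take a per-image lock, check whether each image has already been exported as a tarball under .minikube/cache/images/arm64, and skip the save when the file exists, which is why every check completes in microseconds. A minimal Go sketch of that exists-then-skip pattern, with illustrative paths and helper names rather than minikube's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

// cacheMu stands in for the per-image locks acquired in the log above.
var cacheMu sync.Mutex

// cachePath maps "registry.k8s.io/pause:3.10.1" to
// "<cacheDir>/registry.k8s.io/pause_3.10.1", the layout visible in the log.
func cachePath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

// alreadyCached reports whether the exported tarball exists; a real
// implementation would pull and save the image only when this is false.
func alreadyCached(cacheDir, image string) (bool, error) {
	cacheMu.Lock()
	defer cacheMu.Unlock()
	_, err := os.Stat(cachePath(cacheDir, image))
	if err == nil {
		return true, nil // exists, skip the save
	}
	if os.IsNotExist(err) {
		return false, nil // caller would save the image here
	}
	return false, err
}

func main() {
	dir := os.ExpandEnv("$HOME/.minikube/cache/images/arm64")
	for _, img := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.4-0"} {
		ok, err := alreadyCached(dir, img)
		fmt.Println(img, ok, err)
	}
}
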
	I1009 19:38:49.127767  475149 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:38:49.127793  475149 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:38:49.127807  475149 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:38:49.127832  475149 start.go:360] acquireMachinesLock for no-preload-678119: {Name:mk55480b0ad862c0c372f2026083e24864004a2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.127889  475149 start.go:364] duration metric: took 37.367µs to acquireMachinesLock for "no-preload-678119"
	I1009 19:38:49.127913  475149 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:38:49.127918  475149 fix.go:54] fixHost starting: 
	I1009 19:38:49.128192  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:49.160203  475149 fix.go:112] recreateIfNeeded on no-preload-678119: state=Stopped err=<nil>
	W1009 19:38:49.160241  475149 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:38:47.090536  472674 out.go:252]   - Generating certificates and keys ...
	I1009 19:38:47.090649  472674 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:38:47.090723  472674 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:38:47.663776  472674 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:38:47.916921  472674 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:38:48.398750  472674 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:38:48.896575  472674 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:38:49.033784  472674 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:38:49.034339  472674 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-779570 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:38:49.163507  475149 out.go:252] * Restarting existing docker container for "no-preload-678119" ...
	I1009 19:38:49.163597  475149 cli_runner.go:164] Run: docker start no-preload-678119
	I1009 19:38:49.501880  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:49.533606  475149 kic.go:430] container "no-preload-678119" state is running.
	I1009 19:38:49.534010  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:49.558157  475149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/config.json ...
	I1009 19:38:49.558394  475149 machine.go:93] provisionDockerMachine start ...
	I1009 19:38:49.558454  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:49.592483  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:49.592796  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:49.592814  475149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:38:49.593389  475149 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34622->127.0.0.1:33440: read: connection reset by peer
	I1009 19:38:52.746325  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-678119
	
	I1009 19:38:52.746353  475149 ubuntu.go:182] provisioning hostname "no-preload-678119"
	I1009 19:38:52.746422  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:52.771340  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:52.771712  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:52.771726  475149 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-678119 && echo "no-preload-678119" | sudo tee /etc/hostname
	I1009 19:38:52.958100  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-678119
	
	I1009 19:38:52.958350  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:52.987751  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:52.988125  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:52.988149  475149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-678119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-678119/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-678119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:38:53.155148  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:38:53.155225  475149 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:38:53.155293  475149 ubuntu.go:190] setting up certificates
	I1009 19:38:53.155321  475149 provision.go:84] configureAuth start
	I1009 19:38:53.155442  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:53.180487  475149 provision.go:143] copyHostCerts
	I1009 19:38:53.180573  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:38:53.180583  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:38:53.180669  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:38:53.180767  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:38:53.180773  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:38:53.180801  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:38:53.180887  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:38:53.180892  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:38:53.180919  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:38:53.180968  475149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.no-preload-678119 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-678119]
	I1009 19:38:53.630178  475149 provision.go:177] copyRemoteCerts
	I1009 19:38:53.630299  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:38:53.630392  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:53.655020  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:53.759307  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:38:53.780546  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:38:53.803197  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:38:49.526581  472674 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:38:49.527749  472674 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-779570 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:38:50.123869  472674 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:38:50.546191  472674 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:38:51.177579  472674 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:38:51.181679  472674 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:38:51.741755  472674 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:38:52.608100  472674 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:38:52.946233  472674 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:38:53.250331  472674 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:38:54.122308  472674 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:38:54.128205  472674 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:38:54.131074  472674 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:38:54.134743  472674 out.go:252]   - Booting up control plane ...
	I1009 19:38:54.134867  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:38:54.135875  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:38:54.140792  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:38:54.158742  472674 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:38:54.158867  472674 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:38:54.167147  472674 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:38:54.167465  472674 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:38:54.167692  472674 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:38:53.831635  475149 provision.go:87] duration metric: took 676.267515ms to configureAuth
	I1009 19:38:53.831724  475149 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:38:53.831966  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:53.832133  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:53.851555  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:53.851868  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:53.851889  475149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:38:54.218941  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:38:54.218968  475149 machine.go:96] duration metric: took 4.660565151s to provisionDockerMachine
	I1009 19:38:54.218980  475149 start.go:293] postStartSetup for "no-preload-678119" (driver="docker")
	I1009 19:38:54.218991  475149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:38:54.219051  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:38:54.219114  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.253136  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.363253  475149 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:38:54.366741  475149 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:38:54.366779  475149 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:38:54.366790  475149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:38:54.366851  475149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:38:54.366932  475149 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:38:54.367040  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:38:54.374917  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:38:54.394770  475149 start.go:296] duration metric: took 175.774144ms for postStartSetup
	I1009 19:38:54.394858  475149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:54.394902  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.419819  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.527507  475149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:38:54.532751  475149 fix.go:56] duration metric: took 5.404825073s for fixHost
	I1009 19:38:54.532777  475149 start.go:83] releasing machines lock for "no-preload-678119", held for 5.404874739s
	I1009 19:38:54.532866  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:54.551853  475149 ssh_runner.go:195] Run: cat /version.json
	I1009 19:38:54.551903  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.552171  475149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:38:54.552236  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.587571  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.591973  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.786169  475149 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:54.797207  475149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:38:54.844950  475149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:38:54.850420  475149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:38:54.850518  475149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:38:54.859947  475149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:38:54.859973  475149 start.go:495] detecting cgroup driver to use...
	I1009 19:38:54.860015  475149 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:38:54.860092  475149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:38:54.877167  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:38:54.891860  475149 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:38:54.891931  475149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:38:54.909386  475149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:38:54.929952  475149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:38:55.130327  475149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:38:55.323209  475149 docker.go:234] disabling docker service ...
	I1009 19:38:55.323343  475149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:38:55.346028  475149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:38:55.365298  475149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:38:55.539092  475149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:38:55.674765  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:38:55.689880  475149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:38:55.712183  475149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:38:55.712269  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.722072  475149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:38:55.722162  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.731622  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.740336  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.749099  475149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:38:55.756988  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.765950  475149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.774195  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.782943  475149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:38:55.790578  475149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:38:55.797880  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:55.926886  475149 ssh_runner.go:195] Run: sudo systemctl restart crio
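
The run of commands just above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, default sysctls) and then reloads systemd and restarts CRI-O. A hedged Go sketch of the same line-rewrite idea, meant for a local scratch copy of the file; minikube itself shells out to sed, as the log shows:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteLine replaces every line matching pattern with repl in the file at
// path, the same effect as the logged `sed -i 's|^.*pause_image = .*$|...|'`.
func rewriteLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
}

func main() {
	// Values copied from the log; point this at a scratch copy of the file.
	err := rewriteLine("02-crio.conf",
		`^.*pause_image = .*$`,
		`pause_image = "registry.k8s.io/pause:3.10.1"`)
	fmt.Println(err)
}
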
	I1009 19:38:56.114635  475149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:38:56.114763  475149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:38:56.121096  475149 start.go:563] Will wait 60s for crictl version
	I1009 19:38:56.121207  475149 ssh_runner.go:195] Run: which crictl
	I1009 19:38:56.130479  475149 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:38:56.170766  475149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:38:56.170913  475149 ssh_runner.go:195] Run: crio --version
	I1009 19:38:56.216318  475149 ssh_runner.go:195] Run: crio --version
	I1009 19:38:56.266871  475149 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:38:56.269143  475149 cli_runner.go:164] Run: docker network inspect no-preload-678119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:38:56.291545  475149 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:38:56.295967  475149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:56.312988  475149 kubeadm.go:883] updating cluster {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:38:56.313106  475149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:56.313155  475149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:56.364876  475149 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:56.364904  475149 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:38:56.364912  475149 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 19:38:56.365004  475149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-678119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:38:56.365089  475149 ssh_runner.go:195] Run: crio config
	I1009 19:38:56.430355  475149 cni.go:84] Creating CNI manager for ""
	I1009 19:38:56.430382  475149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:38:56.430401  475149 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:38:56.430439  475149 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-678119 NodeName:no-preload-678119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:38:56.430591  475149 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-678119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:38:56.430676  475149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:38:56.439902  475149 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:38:56.439985  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:38:56.448388  475149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:38:56.462540  475149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:38:56.476519  475149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1009 19:38:56.491082  475149 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:38:56.495043  475149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
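
Both /etc/hosts updates in this run (host.minikube.internal at 19:38:56.295 and control-plane.minikube.internal here) follow the same idempotent pattern: strip any existing line for the name, append the desired mapping, and copy the result back over the file. A sketch of that pattern in Go against a local file, using a hypothetical helper rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Run against a local copy named "hosts", not the real /etc/hosts.
	fmt.Println(ensureHostsEntry("hosts", "192.168.76.2", "control-plane.minikube.internal"))
}
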
	I1009 19:38:56.505554  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:56.720133  475149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:56.748686  475149 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119 for IP: 192.168.76.2
	I1009 19:38:56.748703  475149 certs.go:195] generating shared ca certs ...
	I1009 19:38:56.748722  475149 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:56.748855  475149 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:38:56.748902  475149 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:38:56.748909  475149 certs.go:257] generating profile certs ...
	I1009 19:38:56.748985  475149 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key
	I1009 19:38:56.749043  475149 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7
	I1009 19:38:56.749079  475149 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key
	I1009 19:38:56.749184  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:38:56.749218  475149 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:38:56.749226  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:38:56.749249  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:38:56.749270  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:38:56.749290  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:38:56.749330  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:38:56.749922  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:38:56.793839  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:38:56.836661  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:38:56.881185  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:38:56.944481  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:38:57.004468  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:38:57.065273  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:38:57.111675  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:38:57.159516  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:38:57.193974  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:38:57.224679  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:38:57.249240  475149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:38:57.263896  475149 ssh_runner.go:195] Run: openssl version
	I1009 19:38:57.271087  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:38:57.281107  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.285858  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.285969  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.329453  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:38:57.338547  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:38:57.347911  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.352745  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.352867  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.395027  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:38:57.405079  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:38:57.415089  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.422244  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.422359  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.481668  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
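
Each extra CA above is hashed with `openssl x509 -hash -noout` and then linked as /etc/ssl/certs/<hash>.0, the filename OpenSSL-based clients use to look up trust anchors. A sketch of those two steps from Go, shelling out to openssl for the hash (assumes openssl is on PATH and the target directory is writable):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert asks openssl for the subject hash of certPath and links it as
// <certsDir>/<hash>.0, the pair of steps run by the `openssl x509 -hash`
// and `ln -fs` commands in the log.
func trustCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // approximate -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Writing to /etc/ssl/certs needs root; try a scratch directory first.
	fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
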
	I1009 19:38:57.490775  475149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:38:57.495538  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:38:57.549723  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:38:57.601360  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:38:57.692400  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:38:57.791426  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:38:58.054948  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
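
The `-checkend 86400` calls above ask whether each existing certificate will still be valid 24 hours from now; a failing check would indicate the cert should be regenerated rather than reused. An equivalent check using Go's crypto/x509 (a sketch; the path is one of the files probed above and is assumed readable):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d;
// this is the condition under which `openssl x509 -checkend 86400` (d = 24h)
// exits non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
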
	I1009 19:38:58.224168  475149 kubeadm.go:400] StartCluster: {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:58.224323  475149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:38:58.224429  475149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:38:58.295613  475149 cri.go:89] found id: "776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3"
	I1009 19:38:58.295690  475149 cri.go:89] found id: "dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b"
	I1009 19:38:58.295718  475149 cri.go:89] found id: "3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6"
	I1009 19:38:58.295736  475149 cri.go:89] found id: "e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e"
	I1009 19:38:58.295767  475149 cri.go:89] found id: ""
	I1009 19:38:58.295849  475149 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:38:58.315263  475149 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:38:58Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:38:58.315427  475149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:38:58.352496  475149 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:38:58.352572  475149 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:38:58.352663  475149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:38:58.364117  475149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:58.364656  475149 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-678119" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:58.364824  475149 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-678119" cluster setting kubeconfig missing "no-preload-678119" context setting]
	I1009 19:38:58.365175  475149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.366813  475149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:38:58.383869  475149 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 19:38:58.383950  475149 kubeadm.go:601] duration metric: took 31.357412ms to restartPrimaryControlPlane
	I1009 19:38:58.383987  475149 kubeadm.go:402] duration metric: took 159.827914ms to StartCluster
	I1009 19:38:58.384023  475149 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.384115  475149 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:58.384839  475149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.385108  475149 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:58.385553  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:58.385514  475149 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:58.385729  475149 addons.go:69] Setting storage-provisioner=true in profile "no-preload-678119"
	I1009 19:38:58.385756  475149 addons.go:238] Setting addon storage-provisioner=true in "no-preload-678119"
	W1009 19:38:58.385789  475149 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:38:58.385831  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.386818  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.387032  475149 addons.go:69] Setting dashboard=true in profile "no-preload-678119"
	I1009 19:38:58.387076  475149 addons.go:238] Setting addon dashboard=true in "no-preload-678119"
	W1009 19:38:58.387121  475149 addons.go:247] addon dashboard should already be in state true
	I1009 19:38:58.387163  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.387373  475149 addons.go:69] Setting default-storageclass=true in profile "no-preload-678119"
	I1009 19:38:58.387399  475149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-678119"
	I1009 19:38:58.387676  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.387752  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.391628  475149 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:58.396249  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:58.438333  475149 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:38:58.441306  475149 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:38:58.444158  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:38:58.444184  475149 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:38:58.444265  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.448911  475149 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:38:58.451904  475149 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:58.451928  475149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:58.451993  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.463067  475149 addons.go:238] Setting addon default-storageclass=true in "no-preload-678119"
	W1009 19:38:58.463094  475149 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:38:58.463121  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.463526  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.498859  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:58.520867  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:58.523284  475149 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:58.523308  475149 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:58.523377  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.563152  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.330054  472674 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:38:54.330209  472674 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:38:55.832042  472674 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501826405s
	I1009 19:38:55.840261  472674 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:38:55.840363  472674 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 19:38:55.840711  472674 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:38:55.840800  472674 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:38:58.863878  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:58.891983  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:38:58.892055  475149 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:38:58.998644  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:59.001284  475149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:59.021274  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:38:59.021348  475149 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:38:59.188119  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:38:59.188192  475149 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:38:59.378216  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:38:59.378279  475149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:38:59.465321  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:38:59.465395  475149 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:38:59.514487  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:38:59.514563  475149 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:38:59.558617  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:38:59.558699  475149 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:38:59.588190  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:38:59.588279  475149 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:38:59.618512  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:38:59.618593  475149 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:38:59.643713  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:39:00.610608  472674 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.769277023s
	I1009 19:39:04.343086  472674 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.501851946s
	I1009 19:39:06.343526  472674 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502694801s
	I1009 19:39:06.369876  472674 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:39:06.387294  472674 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:39:06.405800  472674 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:39:06.406305  472674 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-779570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:39:06.423536  472674 kubeadm.go:318] [bootstrap-token] Using token: lmcsj0.9sm8uir04wanmzmq
	I1009 19:39:06.426543  472674 out.go:252]   - Configuring RBAC rules ...
	I1009 19:39:06.426674  472674 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:39:06.436263  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:39:06.447728  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:39:06.452470  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:39:06.460667  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:39:06.465978  472674 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:39:06.754038  472674 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:39:07.255177  472674 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:39:07.755329  472674 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:39:07.756969  472674 kubeadm.go:318] 
	I1009 19:39:07.757057  472674 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:39:07.757082  472674 kubeadm.go:318] 
	I1009 19:39:07.757168  472674 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:39:07.757178  472674 kubeadm.go:318] 
	I1009 19:39:07.757205  472674 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:39:07.757636  472674 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:39:07.757707  472674 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:39:07.757718  472674 kubeadm.go:318] 
	I1009 19:39:07.757775  472674 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:39:07.757784  472674 kubeadm.go:318] 
	I1009 19:39:07.757834  472674 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:39:07.757841  472674 kubeadm.go:318] 
	I1009 19:39:07.757895  472674 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:39:07.757978  472674 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:39:07.758053  472674 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:39:07.758062  472674 kubeadm.go:318] 
	I1009 19:39:07.758357  472674 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:39:07.758448  472674 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:39:07.758458  472674 kubeadm.go:318] 
	I1009 19:39:07.758721  472674 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token lmcsj0.9sm8uir04wanmzmq \
	I1009 19:39:07.758838  472674 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:39:07.759027  472674 kubeadm.go:318] 	--control-plane 
	I1009 19:39:07.759041  472674 kubeadm.go:318] 
	I1009 19:39:07.759298  472674 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:39:07.759310  472674 kubeadm.go:318] 
	I1009 19:39:07.759586  472674 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token lmcsj0.9sm8uir04wanmzmq \
	I1009 19:39:07.759866  472674 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 19:39:07.775985  472674 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:39:07.776271  472674 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:39:07.776415  472674 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:39:07.776431  472674 cni.go:84] Creating CNI manager for ""
	I1009 19:39:07.776440  472674 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:39:07.795000  472674 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:39:07.801486  472674 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:39:07.811908  472674 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:39:07.811932  472674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:39:07.836560  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:39:08.499348  472674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:39:08.499480  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:08.499543  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-779570 minikube.k8s.io/updated_at=2025_10_09T19_39_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=embed-certs-779570 minikube.k8s.io/primary=true
	I1009 19:39:08.888724  472674 ops.go:34] apiserver oom_adj: -16
	I1009 19:39:08.888844  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:09.147612  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.283704721s)
	I1009 19:39:09.147678  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.148965085s)
	I1009 19:39:09.147979  475149 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.146620107s)
	I1009 19:39:09.148013  475149 node_ready.go:35] waiting up to 6m0s for node "no-preload-678119" to be "Ready" ...
	I1009 19:39:09.211602  475149 node_ready.go:49] node "no-preload-678119" is "Ready"
	I1009 19:39:09.211632  475149 node_ready.go:38] duration metric: took 63.599366ms for node "no-preload-678119" to be "Ready" ...
	I1009 19:39:09.211646  475149 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:39:09.211706  475149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:39:09.337873  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.6940585s)
	I1009 19:39:09.337951  475149 api_server.go:72] duration metric: took 10.95271973s to wait for apiserver process to appear ...
	I1009 19:39:09.338027  475149 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:39:09.338046  475149 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:39:09.341094  475149 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-678119 addons enable metrics-server
	
	I1009 19:39:09.343577  475149 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 19:39:09.389493  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:09.889054  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:10.389575  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:10.889661  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:11.389592  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:11.889704  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:12.032460  472674 kubeadm.go:1113] duration metric: took 3.533024214s to wait for elevateKubeSystemPrivileges
	I1009 19:39:12.032488  472674 kubeadm.go:402] duration metric: took 25.214702493s to StartCluster
	I1009 19:39:12.032523  472674 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:39:12.032587  472674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:39:12.034016  472674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:39:12.034541  472674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:39:12.034543  472674 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:39:12.034835  472674 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:39:12.034892  472674 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:39:12.034954  472674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-779570"
	I1009 19:39:12.034971  472674 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-779570"
	I1009 19:39:12.034995  472674 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:39:12.035486  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.036060  472674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-779570"
	I1009 19:39:12.036083  472674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-779570"
	I1009 19:39:12.036402  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.038798  472674 out.go:179] * Verifying Kubernetes components...
	I1009 19:39:12.042674  472674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:39:12.077716  472674 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:39:12.081562  472674 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:39:12.081588  472674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:39:12.081672  472674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:39:12.092287  472674 addons.go:238] Setting addon default-storageclass=true in "embed-certs-779570"
	I1009 19:39:12.092333  472674 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:39:12.092764  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.131640  472674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:39:12.139869  472674 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:39:12.139893  472674 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:39:12.140052  472674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:39:12.171162  472674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:39:12.453116  472674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:39:12.491524  472674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:39:12.491590  472674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:39:12.523534  472674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:39:13.550205  472674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.097009245s)
	I1009 19:39:13.550231  472674 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.058671599s)
	I1009 19:39:13.550270  472674 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.058665282s)
	I1009 19:39:13.550282  472674 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1009 19:39:13.551726  472674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:39:13.552388  472674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.028816669s)
	I1009 19:39:13.644696  472674 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 19:39:09.346473  475149 addons.go:514] duration metric: took 10.96095221s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 19:39:09.361646  475149 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:39:09.361682  475149 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:39:09.838189  475149 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:39:09.846413  475149 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 19:39:09.847470  475149 api_server.go:141] control plane version: v1.34.1
	I1009 19:39:09.847496  475149 api_server.go:131] duration metric: took 509.460385ms to wait for apiserver health ...
	I1009 19:39:09.847507  475149 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:39:09.851215  475149 system_pods.go:59] 8 kube-system pods found
	I1009 19:39:09.851256  475149 system_pods.go:61] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:09.851265  475149 system_pods.go:61] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:39:09.851302  475149 system_pods.go:61] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:39:09.851312  475149 system_pods.go:61] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:39:09.851324  475149 system_pods.go:61] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:39:09.851329  475149 system_pods.go:61] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:39:09.851336  475149 system_pods.go:61] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:39:09.851344  475149 system_pods.go:61] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Running
	I1009 19:39:09.851351  475149 system_pods.go:74] duration metric: took 3.837394ms to wait for pod list to return data ...
	I1009 19:39:09.851379  475149 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:39:09.854041  475149 default_sa.go:45] found service account: "default"
	I1009 19:39:09.854065  475149 default_sa.go:55] duration metric: took 2.679038ms for default service account to be created ...
	I1009 19:39:09.854076  475149 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:39:09.856968  475149 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:09.857000  475149 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:09.857029  475149 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:39:09.857052  475149 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:39:09.857059  475149 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:39:09.857066  475149 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:39:09.857076  475149 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:39:09.857083  475149 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:39:09.857091  475149 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Running
	I1009 19:39:09.857113  475149 system_pods.go:126] duration metric: took 3.029943ms to wait for k8s-apps to be running ...
	I1009 19:39:09.857129  475149 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:39:09.857202  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:39:09.872843  475149 system_svc.go:56] duration metric: took 15.704951ms WaitForService to wait for kubelet
	I1009 19:39:09.872874  475149 kubeadm.go:586] duration metric: took 11.487705051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:39:09.872892  475149 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:39:09.876212  475149 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:39:09.876243  475149 node_conditions.go:123] node cpu capacity is 2
	I1009 19:39:09.876257  475149 node_conditions.go:105] duration metric: took 3.358365ms to run NodePressure ...
	I1009 19:39:09.876269  475149 start.go:241] waiting for startup goroutines ...
	I1009 19:39:09.876277  475149 start.go:246] waiting for cluster config update ...
	I1009 19:39:09.876288  475149 start.go:255] writing updated cluster config ...
	I1009 19:39:09.876587  475149 ssh_runner.go:195] Run: rm -f paused
	I1009 19:39:09.880841  475149 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:09.884250  475149 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:39:11.889457  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	I1009 19:39:13.647509  472674 addons.go:514] duration metric: took 1.612597289s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 19:39:14.059255  472674 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-779570" context rescaled to 1 replicas
	W1009 19:39:13.890731  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:16.390611  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:15.557648  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:18.055169  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:18.891032  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:20.892125  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:23.394200  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:20.062765  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:22.555505  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:25.889428  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:27.893240  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:25.054817  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:27.054867  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:30.390232  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:32.890032  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:29.555075  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:32.055287  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:34.056012  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:34.890646  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:37.389399  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:36.554953  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:38.555125  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:39.396416  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:41.890687  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:41.055629  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:43.554918  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:44.389704  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:46.390038  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:45.555610  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:48.054785  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:48.889857  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	I1009 19:39:49.392625  475149 pod_ready.go:94] pod "coredns-66bc5c9577-cfmf8" is "Ready"
	I1009 19:39:49.392715  475149 pod_ready.go:86] duration metric: took 39.508438777s for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.396449  475149 pod_ready.go:83] waiting for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.402278  475149 pod_ready.go:94] pod "etcd-no-preload-678119" is "Ready"
	I1009 19:39:49.402304  475149 pod_ready.go:86] duration metric: took 5.826472ms for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.405956  475149 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.410555  475149 pod_ready.go:94] pod "kube-apiserver-no-preload-678119" is "Ready"
	I1009 19:39:49.410587  475149 pod_ready.go:86] duration metric: took 4.602417ms for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.412948  475149 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.588332  475149 pod_ready.go:94] pod "kube-controller-manager-no-preload-678119" is "Ready"
	I1009 19:39:49.588360  475149 pod_ready.go:86] duration metric: took 175.386297ms for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.788632  475149 pod_ready.go:83] waiting for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.189292  475149 pod_ready.go:94] pod "kube-proxy-cf6gt" is "Ready"
	I1009 19:39:50.189321  475149 pod_ready.go:86] duration metric: took 400.662881ms for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.388545  475149 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.788692  475149 pod_ready.go:94] pod "kube-scheduler-no-preload-678119" is "Ready"
	I1009 19:39:50.788721  475149 pod_ready.go:86] duration metric: took 400.15168ms for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.788735  475149 pod_ready.go:40] duration metric: took 40.907858692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:50.844656  475149 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:39:50.847818  475149 out.go:179] * Done! kubectl is now configured to use "no-preload-678119" cluster and "default" namespace by default
	W1009 19:39:50.054842  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:52.055250  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:54.555262  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	I1009 19:39:55.556813  472674 node_ready.go:49] node "embed-certs-779570" is "Ready"
	I1009 19:39:55.556840  472674 node_ready.go:38] duration metric: took 42.005077378s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:39:55.556854  472674 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:39:55.556916  472674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:39:55.571118  472674 api_server.go:72] duration metric: took 43.536495654s to wait for apiserver process to appear ...
	I1009 19:39:55.571141  472674 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:39:55.571160  472674 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:39:55.581899  472674 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:39:55.583069  472674 api_server.go:141] control plane version: v1.34.1
	I1009 19:39:55.583099  472674 api_server.go:131] duration metric: took 11.951146ms to wait for apiserver health ...
	I1009 19:39:55.583110  472674 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:39:55.586206  472674 system_pods.go:59] 8 kube-system pods found
	I1009 19:39:55.586242  472674 system_pods.go:61] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.586250  472674 system_pods.go:61] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.586257  472674 system_pods.go:61] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.586262  472674 system_pods.go:61] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.586267  472674 system_pods.go:61] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.586273  472674 system_pods.go:61] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.586277  472674 system_pods.go:61] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.586284  472674 system_pods.go:61] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.586299  472674 system_pods.go:74] duration metric: took 3.182256ms to wait for pod list to return data ...
	I1009 19:39:55.586309  472674 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:39:55.589128  472674 default_sa.go:45] found service account: "default"
	I1009 19:39:55.589156  472674 default_sa.go:55] duration metric: took 2.840943ms for default service account to be created ...
	I1009 19:39:55.589166  472674 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:39:55.593610  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:55.593642  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.593648  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.593655  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.593659  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.593664  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.593668  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.593673  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.593679  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.593704  472674 retry.go:31] will retry after 245.493217ms: missing components: kube-dns
	I1009 19:39:55.844658  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:55.844692  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.844699  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.844722  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.844727  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.844732  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.844736  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.844740  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.844746  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.844761  472674 retry.go:31] will retry after 270.704249ms: missing components: kube-dns
	I1009 19:39:56.120386  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:56.120421  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:56.120428  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:56.120434  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:56.120439  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:56.120445  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:56.120449  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:56.120453  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:56.120459  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:56.120497  472674 retry.go:31] will retry after 482.359976ms: missing components: kube-dns
	I1009 19:39:56.606422  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:56.606457  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:56.606465  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:56.606471  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:56.606475  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:56.606480  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:56.606484  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:56.606489  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:56.606495  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:56.606514  472674 retry.go:31] will retry after 538.519972ms: missing components: kube-dns
	I1009 19:39:57.150098  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:57.150205  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Running
	I1009 19:39:57.150219  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:57.150225  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:57.150232  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:57.150242  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:57.150247  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:57.150251  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:57.150255  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Running
	I1009 19:39:57.150266  472674 system_pods.go:126] duration metric: took 1.56109474s to wait for k8s-apps to be running ...
	I1009 19:39:57.150279  472674 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:39:57.150332  472674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:39:57.168317  472674 system_svc.go:56] duration metric: took 18.028148ms WaitForService to wait for kubelet
	I1009 19:39:57.168348  472674 kubeadm.go:586] duration metric: took 45.133730211s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:39:57.168367  472674 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:39:57.171899  472674 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:39:57.171942  472674 node_conditions.go:123] node cpu capacity is 2
	I1009 19:39:57.171957  472674 node_conditions.go:105] duration metric: took 3.584132ms to run NodePressure ...
	I1009 19:39:57.171969  472674 start.go:241] waiting for startup goroutines ...
	I1009 19:39:57.171977  472674 start.go:246] waiting for cluster config update ...
	I1009 19:39:57.171990  472674 start.go:255] writing updated cluster config ...
	I1009 19:39:57.172347  472674 ssh_runner.go:195] Run: rm -f paused
	I1009 19:39:57.177547  472674 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:57.182493  472674 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.196579  472674 pod_ready.go:94] pod "coredns-66bc5c9577-4c9xb" is "Ready"
	I1009 19:39:57.196608  472674 pod_ready.go:86] duration metric: took 14.085128ms for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.199978  472674 pod_ready.go:83] waiting for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.208562  472674 pod_ready.go:94] pod "etcd-embed-certs-779570" is "Ready"
	I1009 19:39:57.208600  472674 pod_ready.go:86] duration metric: took 8.594816ms for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.211279  472674 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.217066  472674 pod_ready.go:94] pod "kube-apiserver-embed-certs-779570" is "Ready"
	I1009 19:39:57.217103  472674 pod_ready.go:86] duration metric: took 5.798296ms for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.219580  472674 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.582171  472674 pod_ready.go:94] pod "kube-controller-manager-embed-certs-779570" is "Ready"
	I1009 19:39:57.582252  472674 pod_ready.go:86] duration metric: took 362.649708ms for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.782320  472674 pod_ready.go:83] waiting for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.181328  472674 pod_ready.go:94] pod "kube-proxy-sp4bk" is "Ready"
	I1009 19:39:58.181359  472674 pod_ready.go:86] duration metric: took 399.01215ms for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.381832  472674 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.781804  472674 pod_ready.go:94] pod "kube-scheduler-embed-certs-779570" is "Ready"
	I1009 19:39:58.781835  472674 pod_ready.go:86] duration metric: took 399.975272ms for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.781847  472674 pod_ready.go:40] duration metric: took 1.604264696s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:58.836096  472674 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:39:58.841462  472674 out.go:179] * Done! kubectl is now configured to use "embed-certs-779570" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.362975997Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.366264412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.366301163Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.36632573Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.369419944Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.369456942Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.369532151Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.372435766Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.372469654Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.372495229Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.37566192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.375697375Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.174820772Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e2b8df12-ae1d-4522-9660-769a17ccb93c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.187309778Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=24ceca1a-e4d4-4445-91e6-110e22ee5aa5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.193784586Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v/dashboard-metrics-scraper" id=9ab5ecbf-fe6d-43d9-b8be-d8e25d7ca0f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.194072015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.203366909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.204034018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.224794703Z" level=info msg="Created container 96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v/dashboard-metrics-scraper" id=9ab5ecbf-fe6d-43d9-b8be-d8e25d7ca0f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.228045038Z" level=info msg="Starting container: 96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521" id=71ad50b5-156d-4651-b7f4-bffbb947bbb6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.231452175Z" level=info msg="Started container" PID=1715 containerID=96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v/dashboard-metrics-scraper id=71ad50b5-156d-4651-b7f4-bffbb947bbb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3cd054d4e49dd57196b06059733d2b460bc079f3b40436a2b37c6ca8feeefe0
	Oct 09 19:39:57 no-preload-678119 conmon[1713]: conmon 96f3bd2101071843a9c2 <ninfo>: container 1715 exited with status 1
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.540504763Z" level=info msg="Removing container: eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1" id=014bacf8-e9e3-489e-81ef-d0d7d7b83d40 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.54805658Z" level=info msg="Error loading conmon cgroup of container eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1: cgroup deleted" id=014bacf8-e9e3-489e-81ef-d0d7d7b83d40 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.553878539Z" level=info msg="Removed container eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v/dashboard-metrics-scraper" id=014bacf8-e9e3-489e-81ef-d0d7d7b83d40 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	96f3bd2101071       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   a3cd054d4e49d       dashboard-metrics-scraper-6ffb444bf9-96r8v   kubernetes-dashboard
	4110f531550ee       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           26 seconds ago       Running             storage-provisioner         2                   10349560f3977       storage-provisioner                          kube-system
	596a04caf4eb7       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   4e4b5a451505b       kubernetes-dashboard-855c9754f9-6zf28        kubernetes-dashboard
	e5baf5832a360       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   607e94b520909       coredns-66bc5c9577-cfmf8                     kube-system
	4f359dfcc7a8f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   1914904cb7c3d       busybox                                      default
	b244ecd4999c0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   af03f47df2cca       kindnet-rg6kc                                kube-system
	5ec0cc7ddf9a2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   f0c8e30b405de       kube-proxy-cf6gt                             kube-system
	3d1a1090c5752       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   10349560f3977       storage-provisioner                          kube-system
	776d5d2a3dd24       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   84263d0de6094       kube-controller-manager-no-preload-678119    kube-system
	dffc78179e9e6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0920abad12200       kube-scheduler-no-preload-678119             kube-system
	3c637faddd2e8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b2e59a413c49a       kube-apiserver-no-preload-678119             kube-system
	e31f57ead2454       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   363cd8e1a22f5       etcd-no-preload-678119                       kube-system
	
	
	==> coredns [e5baf5832a360386e095d5e41a8324cfa2d11a9c88e8f2319bfc1252311ac7b4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42474 - 14178 "HINFO IN 7820871439522576481.5862110629079179430. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023368941s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-678119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-678119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=no-preload-678119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_38_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-678119
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:39:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:39:36 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:39:36 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:39:36 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:39:36 +0000   Thu, 09 Oct 2025 19:38:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-678119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b75a7a9ef584b398f5fa81ed5aad07c
	  System UUID:                b33fed70-8b70-482e-bac9-78dc101bc1cd
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-cfmf8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 etcd-no-preload-678119                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-rg6kc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m2s
	  kube-system                 kube-apiserver-no-preload-678119              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-no-preload-678119     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-cf6gt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-no-preload-678119              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-96r8v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6zf28         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m                     kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node no-preload-678119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node no-preload-678119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s (x8 over 2m19s)  kubelet          Node no-preload-678119 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m7s                   kubelet          Node no-preload-678119 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m7s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m7s                   kubelet          Node no-preload-678119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m7s                   kubelet          Node no-preload-678119 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m3s                   node-controller  Node no-preload-678119 event: Registered Node no-preload-678119 in Controller
	  Normal   NodeReady                108s                   kubelet          Node no-preload-678119 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)      kubelet          Node no-preload-678119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)      kubelet          Node no-preload-678119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)      kubelet          Node no-preload-678119 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node no-preload-678119 event: Registered Node no-preload-678119 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e] <==
	{"level":"warn","ts":"2025-10-09T19:39:02.831740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.883389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.949843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.970476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.090485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.115234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.160291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.191269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.243126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.284793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.342231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.390306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.463844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.518195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.549227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.603242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.701102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.722810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.784044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.822944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.958760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:04.002672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:04.052997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:04.108273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:04.251983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36612","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:40:06 up  2:22,  0 user,  load average: 2.92, 2.85, 2.30
	Linux no-preload-678119 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b244ecd4999c0ff9c2715a575dc3c0a4f78e1e61014e5148fa3a221fcd2c5d67] <==
	I1009 19:39:08.070786       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:39:08.089213       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 19:39:08.089586       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:39:08.089603       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:39:08.089619       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:39:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:39:08.353473       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:39:08.353492       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:39:08.353501       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:39:08.353789       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:39:38.354038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:39:38.354054       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:39:38.354198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:39:38.354250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 19:39:39.653688       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:39:39.653801       1 metrics.go:72] Registering metrics
	I1009 19:39:39.653891       1 controller.go:711] "Syncing nftables rules"
	I1009 19:39:48.354288       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:48.354413       1 main.go:301] handling current node
	I1009 19:39:58.352672       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:58.352783       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6] <==
	I1009 19:39:06.252474       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:39:06.267351       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:39:06.267589       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:39:06.273985       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1009 19:39:06.276088       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:39:06.286051       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 19:39:06.286107       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:39:06.310888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:39:06.330401       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:39:06.330670       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 19:39:06.330697       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 19:39:06.330745       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:39:06.331054       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:39:06.341108       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:39:07.053996       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:39:07.097892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:39:08.490060       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:39:08.834917       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:39:08.988539       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:39:09.063686       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:39:09.301888       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.59.16"}
	I1009 19:39:09.329293       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.246.190"}
	I1009 19:39:11.967704       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:39:12.123282       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:39:12.172961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3] <==
	I1009 19:39:11.753627       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:39:11.759109       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:39:11.759341       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:39:11.759395       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 19:39:11.759646       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:39:11.759707       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:39:11.763029       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 19:39:11.778431       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:39:11.780779       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 19:39:11.781969       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:39:11.782398       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 19:39:11.782478       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 19:39:11.782526       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 19:39:11.782592       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 19:39:11.782621       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 19:39:11.785709       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 19:39:11.785788       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 19:39:11.785938       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:39:11.786030       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-678119"
	I1009 19:39:11.786555       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 19:39:11.803694       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:39:11.803994       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:39:11.809398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:39:11.809430       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:39:11.809437       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5ec0cc7ddf9a258bdb958420cd6e2751c0d81f215b0fa96445c52c0fcc7c6d19] <==
	I1009 19:39:09.533409       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:39:09.635849       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:39:09.741469       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:39:09.741581       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 19:39:09.741683       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:39:09.770984       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:39:09.771121       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:39:09.774826       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:39:09.775187       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:39:09.775396       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:39:09.777055       1 config.go:200] "Starting service config controller"
	I1009 19:39:09.777112       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:39:09.777173       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:39:09.777200       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:39:09.777234       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:39:09.777259       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:39:09.778034       1 config.go:309] "Starting node config controller"
	I1009 19:39:09.778085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:39:09.778115       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:39:09.878508       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:39:09.878598       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:39:09.878621       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b] <==
	I1009 19:39:05.166621       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:39:09.708064       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:39:09.708100       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:39:09.713596       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:39:09.713685       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 19:39:09.713711       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 19:39:09.713753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:39:09.714598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:39:09.714622       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:39:09.714639       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:39:09.714645       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:39:09.813802       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 19:39:09.815479       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:39:09.815532       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:39:18 no-preload-678119 kubelet[773]: I1009 19:39:18.409851     773 scope.go:117] "RemoveContainer" containerID="adc63c6b7d741e69714ffde7280b5f9411d06368984b8fb920fe8ba699135f21"
	Oct 09 19:39:19 no-preload-678119 kubelet[773]: I1009 19:39:19.416032     773 scope.go:117] "RemoveContainer" containerID="adc63c6b7d741e69714ffde7280b5f9411d06368984b8fb920fe8ba699135f21"
	Oct 09 19:39:19 no-preload-678119 kubelet[773]: I1009 19:39:19.416431     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:19 no-preload-678119 kubelet[773]: E1009 19:39:19.416574     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:20 no-preload-678119 kubelet[773]: I1009 19:39:20.427496     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:20 no-preload-678119 kubelet[773]: E1009 19:39:20.430924     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:22 no-preload-678119 kubelet[773]: I1009 19:39:22.624451     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:22 no-preload-678119 kubelet[773]: E1009 19:39:22.624631     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: I1009 19:39:33.172966     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: I1009 19:39:33.469728     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: I1009 19:39:33.470031     773 scope.go:117] "RemoveContainer" containerID="eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: E1009 19:39:33.470229     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: I1009 19:39:33.496974     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6zf28" podStartSLOduration=11.657728668 podStartE2EDuration="21.496957233s" podCreationTimestamp="2025-10-09 19:39:12 +0000 UTC" firstStartedPulling="2025-10-09 19:39:12.730154041 +0000 UTC m=+15.988356694" lastFinishedPulling="2025-10-09 19:39:22.569382606 +0000 UTC m=+25.827585259" observedRunningTime="2025-10-09 19:39:23.457164366 +0000 UTC m=+26.715367036" watchObservedRunningTime="2025-10-09 19:39:33.496957233 +0000 UTC m=+36.755159886"
	Oct 09 19:39:39 no-preload-678119 kubelet[773]: I1009 19:39:39.489416     773 scope.go:117] "RemoveContainer" containerID="3d1a1090c5752c6b20ae312cdcd4f613e1bd218e9761dacc7b73be9f83733513"
	Oct 09 19:39:42 no-preload-678119 kubelet[773]: I1009 19:39:42.624875     773 scope.go:117] "RemoveContainer" containerID="eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1"
	Oct 09 19:39:42 no-preload-678119 kubelet[773]: E1009 19:39:42.625496     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:57 no-preload-678119 kubelet[773]: I1009 19:39:57.172524     773 scope.go:117] "RemoveContainer" containerID="eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1"
	Oct 09 19:39:57 no-preload-678119 kubelet[773]: I1009 19:39:57.538610     773 scope.go:117] "RemoveContainer" containerID="eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1"
	Oct 09 19:39:57 no-preload-678119 kubelet[773]: I1009 19:39:57.538938     773 scope.go:117] "RemoveContainer" containerID="96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521"
	Oct 09 19:39:57 no-preload-678119 kubelet[773]: E1009 19:39:57.539091     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:40:02 no-preload-678119 kubelet[773]: I1009 19:40:02.626692     773 scope.go:117] "RemoveContainer" containerID="96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521"
	Oct 09 19:40:02 no-preload-678119 kubelet[773]: E1009 19:40:02.626890     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:40:03 no-preload-678119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:40:03 no-preload-678119 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:40:03 no-preload-678119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [596a04caf4eb777f4ae839f82d27926841fdbc20c77d098530cd6c5998be36fc] <==
	2025/10/09 19:39:22 Using namespace: kubernetes-dashboard
	2025/10/09 19:39:22 Using in-cluster config to connect to apiserver
	2025/10/09 19:39:22 Using secret token for csrf signing
	2025/10/09 19:39:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 19:39:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 19:39:22 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 19:39:22 Generating JWE encryption key
	2025/10/09 19:39:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 19:39:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 19:39:23 Initializing JWE encryption key from synchronized object
	2025/10/09 19:39:23 Creating in-cluster Sidecar client
	2025/10/09 19:39:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:39:23 Serving insecurely on HTTP port: 9090
	2025/10/09 19:39:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:39:22 Starting overwatch
	
	
	==> storage-provisioner [3d1a1090c5752c6b20ae312cdcd4f613e1bd218e9761dacc7b73be9f83733513] <==
	I1009 19:39:08.478397       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:39:38.481029       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4110f531550ee3bba9d39633377e98dc4d9b90cbabe48ed631473d7cc3c34d1f] <==
	I1009 19:39:39.550666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:39:39.564129       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:39:39.564187       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 19:39:39.567596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:43.025734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:47.286335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:50.884801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:53.938781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:56.960670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:56.967535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:39:56.967902       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:39:56.968165       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-678119_c912bbb6-7927-4397-ba84-03ab1213976e!
	I1009 19:39:56.969061       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56611be6-d733-49ae-861a-2846139fa527", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-678119_c912bbb6-7927-4397-ba84-03ab1213976e became leader
	W1009 19:39:56.977206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:56.988102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:39:57.068426       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-678119_c912bbb6-7927-4397-ba84-03ab1213976e!
	W1009 19:39:58.991554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:59.000003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:01.014517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:01.027089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:03.030486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:03.037612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:05.048041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:05.053915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
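Note on the provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings appear because the storage provisioner's leader election still locks on the kube-system/k8s.io-minikube-hostpath Endpoints object (see the LeaderElection event it records). For reference only, a minimal client-go sketch of the same election using the newer Lease lock, which does not trigger that warning; the identity, timings and error handling are illustrative and not the provisioner's actual code:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // provisioner pods run in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // illustrative identity (e.g. the pod name)

		// Lease-based lock in coordination.k8s.io/v1 instead of a v1 Endpoints object.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, starting controller") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}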
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-678119 -n no-preload-678119
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-678119 -n no-preload-678119: exit status 2 (495.288939ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-678119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
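The field selector above (status.phase!=Running) is how the harness surfaces stuck pods after a failure. The same query can be issued from Go with client-go; a small sketch, with the kubeconfig path as a placeholder assumption rather than the path this CI job uses:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; the CI job points KUBECONFIG at the profile's kubeconfig instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Namespace "" lists across all namespaces, matching kubectl's -A flag.
		pods, err := client.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}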
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-678119
helpers_test.go:243: (dbg) docker inspect no-preload-678119:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198",
	        "Created": "2025-10-09T19:37:06.160258648Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475283,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:38:49.210284712Z",
	            "FinishedAt": "2025-10-09T19:38:48.135487852Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/hosts",
	        "LogPath": "/var/lib/docker/containers/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198/2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198-json.log",
	        "Name": "/no-preload-678119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-678119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-678119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e3aac5c1c114e189172b3cfcb638504713075266bf7fc1d93328017849e1198",
	                "LowerDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ec6de0add5b64658bbaff80a35f76753848bdd6e4a004685f9037890d7fb4ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-678119",
	                "Source": "/var/lib/docker/volumes/no-preload-678119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-678119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-678119",
	                "name.minikube.sigs.k8s.io": "no-preload-678119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bded70007e05a3f6d6785d76afa636b82c3589a2b908dd781acdbf2680fa772c",
	            "SandboxKey": "/var/run/docker/netns/bded70007e05",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-678119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:3f:65:8f:70:81",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5323b5d2b808ea7e86b28785565321bc6d429621f1f5c630eb2a054cf03b7389",
	                    "EndpointID": "0e2dc78d30912b69cc9507c134a03217db8e3039ec96216b5688ba86373acf05",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-678119",
	                        "2e3aac5c1c11"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
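The inspect JSON above is the data that the Go templates used later in this log (for example {{.State.Status}} and {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}) are evaluated against. A rough sketch of reading the same two fields through the Docker Engine Go SDK instead of the CLI; the container name is taken from this run, everything else is illustrative:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "no-preload-678119")
		if err != nil {
			log.Fatal(err)
		}

		// Same fields the CLI templates read: container state and the published SSH port.
		fmt.Println("state:", info.State.Status)
		if bindings, ok := info.NetworkSettings.Ports["22/tcp"]; ok && len(bindings) > 0 {
			fmt.Println("ssh:", bindings[0].HostIP+":"+bindings[0].HostPort)
		}
	}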
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-678119 -n no-preload-678119
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-678119 -n no-preload-678119: exit status 2 (554.349954ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
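The --format={{.Host}} and --format={{.APIServer}} flags used by these status checks are ordinary Go templates rendered over minikube's status fields, and the exit code is computed separately from the template output, which is why the template can print Running here while the command still exits non-zero. A toy illustration of that mechanism; the struct and its values are stand-ins for the example, not minikube's internal type:

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for the status fields a --format template can reference.
	type clusterStatus struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		s := clusterStatus{Host: "Running", Kubelet: "Stopped", APIServer: "Paused", Kubeconfig: "Configured"}

		// Roughly what --format={{.APIServer}} does with the gathered status.
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := t.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}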
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-678119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-678119 logs -n 25: (1.699895262s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ delete  │ -p force-systemd-env-028248                                                                                                                                                                                                                   │ force-systemd-env-028248 │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ cert-options-983220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ -p cert-options-983220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ delete  │ -p cert-options-983220                                                                                                                                                                                                                        │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-259172   │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │ 09 Oct 25 19:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-271815 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ delete  │ -p cert-expiration-259172                                                                                                                                                                                                                     │ cert-expiration-259172   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ stop    │ -p no-preload-678119 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p no-preload-678119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-779570       │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:38:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:38:48.813211  475149 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:48.813414  475149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:48.813442  475149 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:48.813462  475149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:48.813744  475149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:38:48.814213  475149 out.go:368] Setting JSON to false
	I1009 19:38:48.815200  475149 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8480,"bootTime":1760030249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:38:48.815292  475149 start.go:141] virtualization:  
	I1009 19:38:48.818274  475149 out.go:179] * [no-preload-678119] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:38:48.822174  475149 notify.go:220] Checking for updates...
	I1009 19:38:48.825926  475149 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:38:48.828915  475149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:48.831842  475149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:48.834784  475149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:38:48.837573  475149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:38:48.840448  475149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:38:48.843897  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:48.844471  475149 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:38:48.874806  475149 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:38:48.874928  475149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:48.987721  475149 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:38:48.978239172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:38:48.987828  475149 docker.go:318] overlay module found
	I1009 19:38:48.991227  475149 out.go:179] * Using the docker driver based on existing profile
	I1009 19:38:48.994112  475149 start.go:305] selected driver: docker
	I1009 19:38:48.994277  475149 start.go:925] validating driver "docker" against &{Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:48.994394  475149 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:38:48.995048  475149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:49.091619  475149 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:38:49.077776228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:38:49.091980  475149 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:49.092010  475149 cni.go:84] Creating CNI manager for ""
	I1009 19:38:49.092070  475149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:38:49.092114  475149 start.go:349] cluster config:
	{Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:49.095401  475149 out.go:179] * Starting "no-preload-678119" primary control-plane node in "no-preload-678119" cluster
	I1009 19:38:49.098259  475149 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:38:49.101090  475149 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:38:49.103853  475149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:49.104001  475149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/config.json ...
	I1009 19:38:49.104325  475149 cache.go:107] acquiring lock: {Name:mkf75ee142286ad1bdc0e9c0aa3f48e64fafdbe8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104424  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 19:38:49.104438  475149 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 125.819µs
	I1009 19:38:49.104456  475149 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 19:38:49.104469  475149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:38:49.104663  475149 cache.go:107] acquiring lock: {Name:mk25f7c277db514655a4eee10ac8e6ce05f41968 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104735  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1009 19:38:49.104747  475149 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 90.06µs
	I1009 19:38:49.104755  475149 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1009 19:38:49.104767  475149 cache.go:107] acquiring lock: {Name:mkf23fc2fd145cfb44f93f7bd77348bc96e294c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104802  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1009 19:38:49.104812  475149 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 46.36µs
	I1009 19:38:49.104819  475149 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1009 19:38:49.104828  475149 cache.go:107] acquiring lock: {Name:mkf1b5cecee0ad7719ec268fb80d35042f8ea9ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104861  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1009 19:38:49.104870  475149 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.503µs
	I1009 19:38:49.104876  475149 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1009 19:38:49.104885  475149 cache.go:107] acquiring lock: {Name:mk6d2ee36782fdd52dfc3b1b6d6b824788680c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104911  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1009 19:38:49.104920  475149 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 36.333µs
	I1009 19:38:49.104927  475149 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1009 19:38:49.104937  475149 cache.go:107] acquiring lock: {Name:mkbd960140c8f1b68fbb8e3db795bee47fe958c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104967  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1009 19:38:49.104976  475149 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.796µs
	I1009 19:38:49.104989  475149 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1009 19:38:49.105002  475149 cache.go:107] acquiring lock: {Name:mkb6bcbed58f86de43d5846c736eec4c3f941cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.105034  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1009 19:38:49.105043  475149 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.593µs
	I1009 19:38:49.105105  475149 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1009 19:38:49.105128  475149 cache.go:107] acquiring lock: {Name:mkae0e70582a2b9e175be8a94ecf46f19839bead Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.105176  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1009 19:38:49.105187  475149 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 61.49µs
	I1009 19:38:49.105204  475149 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1009 19:38:49.105211  475149 cache.go:87] Successfully saved all images to host disk.
	I1009 19:38:49.127767  475149 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:38:49.127793  475149 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:38:49.127807  475149 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:38:49.127832  475149 start.go:360] acquireMachinesLock for no-preload-678119: {Name:mk55480b0ad862c0c372f2026083e24864004a2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.127889  475149 start.go:364] duration metric: took 37.367µs to acquireMachinesLock for "no-preload-678119"
	I1009 19:38:49.127913  475149 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:38:49.127918  475149 fix.go:54] fixHost starting: 
	I1009 19:38:49.128192  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:49.160203  475149 fix.go:112] recreateIfNeeded on no-preload-678119: state=Stopped err=<nil>
	W1009 19:38:49.160241  475149 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:38:47.090536  472674 out.go:252]   - Generating certificates and keys ...
	I1009 19:38:47.090649  472674 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:38:47.090723  472674 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:38:47.663776  472674 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:38:47.916921  472674 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:38:48.398750  472674 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:38:48.896575  472674 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:38:49.033784  472674 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:38:49.034339  472674 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-779570 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:38:49.163507  475149 out.go:252] * Restarting existing docker container for "no-preload-678119" ...
	I1009 19:38:49.163597  475149 cli_runner.go:164] Run: docker start no-preload-678119
	I1009 19:38:49.501880  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:49.533606  475149 kic.go:430] container "no-preload-678119" state is running.
	I1009 19:38:49.534010  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:49.558157  475149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/config.json ...
	I1009 19:38:49.558394  475149 machine.go:93] provisionDockerMachine start ...
	I1009 19:38:49.558454  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:49.592483  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:49.592796  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:49.592814  475149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:38:49.593389  475149 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34622->127.0.0.1:33440: read: connection reset by peer
	I1009 19:38:52.746325  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-678119
	
	I1009 19:38:52.746353  475149 ubuntu.go:182] provisioning hostname "no-preload-678119"
	I1009 19:38:52.746422  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:52.771340  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:52.771712  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:52.771726  475149 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-678119 && echo "no-preload-678119" | sudo tee /etc/hostname
	I1009 19:38:52.958100  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-678119
	
	I1009 19:38:52.958350  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:52.987751  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:52.988125  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:52.988149  475149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-678119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-678119/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-678119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:38:53.155148  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:38:53.155225  475149 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:38:53.155293  475149 ubuntu.go:190] setting up certificates
	I1009 19:38:53.155321  475149 provision.go:84] configureAuth start
	I1009 19:38:53.155442  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:53.180487  475149 provision.go:143] copyHostCerts
	I1009 19:38:53.180573  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:38:53.180583  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:38:53.180669  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:38:53.180767  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:38:53.180773  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:38:53.180801  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:38:53.180887  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:38:53.180892  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:38:53.180919  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:38:53.180968  475149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.no-preload-678119 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-678119]
	I1009 19:38:53.630178  475149 provision.go:177] copyRemoteCerts
	I1009 19:38:53.630299  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:38:53.630392  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:53.655020  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:53.759307  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:38:53.780546  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:38:53.803197  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:38:49.526581  472674 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:38:49.527749  472674 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-779570 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:38:50.123869  472674 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:38:50.546191  472674 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:38:51.177579  472674 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:38:51.181679  472674 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:38:51.741755  472674 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:38:52.608100  472674 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:38:52.946233  472674 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:38:53.250331  472674 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:38:54.122308  472674 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:38:54.128205  472674 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:38:54.131074  472674 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:38:54.134743  472674 out.go:252]   - Booting up control plane ...
	I1009 19:38:54.134867  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:38:54.135875  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:38:54.140792  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:38:54.158742  472674 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:38:54.158867  472674 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:38:54.167147  472674 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:38:54.167465  472674 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:38:54.167692  472674 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:38:53.831635  475149 provision.go:87] duration metric: took 676.267515ms to configureAuth
	I1009 19:38:53.831724  475149 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:38:53.831966  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:53.832133  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:53.851555  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:53.851868  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:53.851889  475149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:38:54.218941  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:38:54.218968  475149 machine.go:96] duration metric: took 4.660565151s to provisionDockerMachine
	I1009 19:38:54.218980  475149 start.go:293] postStartSetup for "no-preload-678119" (driver="docker")
	I1009 19:38:54.218991  475149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:38:54.219051  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:38:54.219114  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.253136  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.363253  475149 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:38:54.366741  475149 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:38:54.366779  475149 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:38:54.366790  475149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:38:54.366851  475149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:38:54.366932  475149 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:38:54.367040  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:38:54.374917  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:38:54.394770  475149 start.go:296] duration metric: took 175.774144ms for postStartSetup
	I1009 19:38:54.394858  475149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:54.394902  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.419819  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.527507  475149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:38:54.532751  475149 fix.go:56] duration metric: took 5.404825073s for fixHost
	I1009 19:38:54.532777  475149 start.go:83] releasing machines lock for "no-preload-678119", held for 5.404874739s
	I1009 19:38:54.532866  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:54.551853  475149 ssh_runner.go:195] Run: cat /version.json
	I1009 19:38:54.551903  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.552171  475149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:38:54.552236  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.587571  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.591973  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.786169  475149 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:54.797207  475149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:38:54.844950  475149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:38:54.850420  475149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:38:54.850518  475149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:38:54.859947  475149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:38:54.859973  475149 start.go:495] detecting cgroup driver to use...
	I1009 19:38:54.860015  475149 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:38:54.860092  475149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:38:54.877167  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:38:54.891860  475149 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:38:54.891931  475149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:38:54.909386  475149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:38:54.929952  475149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:38:55.130327  475149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:38:55.323209  475149 docker.go:234] disabling docker service ...
	I1009 19:38:55.323343  475149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:38:55.346028  475149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:38:55.365298  475149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:38:55.539092  475149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:38:55.674765  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:38:55.689880  475149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:38:55.712183  475149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:38:55.712269  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.722072  475149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:38:55.722162  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.731622  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.740336  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.749099  475149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:38:55.756988  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.765950  475149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.774195  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
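	(Aside, illustrative and not part of the captured output: the sed edits above all target /etc/crio/crio.conf.d/02-crio.conf. Assuming the stock kicbase drop-in keeps these keys under the usual [crio.image]/[crio.runtime] sections — the section placement is an assumption, only the values come from the commands above — the file ends up roughly as:
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	)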
	I1009 19:38:55.782943  475149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:38:55.790578  475149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:38:55.797880  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:55.926886  475149 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:38:56.114635  475149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:38:56.114763  475149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:38:56.121096  475149 start.go:563] Will wait 60s for crictl version
	I1009 19:38:56.121207  475149 ssh_runner.go:195] Run: which crictl
	I1009 19:38:56.130479  475149 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:38:56.170766  475149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:38:56.170913  475149 ssh_runner.go:195] Run: crio --version
	I1009 19:38:56.216318  475149 ssh_runner.go:195] Run: crio --version
	I1009 19:38:56.266871  475149 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:38:56.269143  475149 cli_runner.go:164] Run: docker network inspect no-preload-678119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:38:56.291545  475149 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:38:56.295967  475149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:56.312988  475149 kubeadm.go:883] updating cluster {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:38:56.313106  475149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:56.313155  475149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:56.364876  475149 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:56.364904  475149 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:38:56.364912  475149 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 19:38:56.365004  475149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-678119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:38:56.365089  475149 ssh_runner.go:195] Run: crio config
	I1009 19:38:56.430355  475149 cni.go:84] Creating CNI manager for ""
	I1009 19:38:56.430382  475149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:38:56.430401  475149 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:38:56.430439  475149 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-678119 NodeName:no-preload-678119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:38:56.430591  475149 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-678119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
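	(Aside, illustrative and not part of the captured output: the kubeadm config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch of an offline sanity check, assuming kubeadm is among the binaries found under /var/lib/minikube/binaries/v1.34.1 and that its "config validate" subcommand is available in this version:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	This should exit non-zero with a descriptive error if the file carries unknown fields or a mismatched apiVersion.)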
	I1009 19:38:56.430676  475149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:38:56.439902  475149 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:38:56.439985  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:38:56.448388  475149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:38:56.462540  475149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:38:56.476519  475149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1009 19:38:56.491082  475149 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:38:56.495043  475149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:56.505554  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:56.720133  475149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:56.748686  475149 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119 for IP: 192.168.76.2
	I1009 19:38:56.748703  475149 certs.go:195] generating shared ca certs ...
	I1009 19:38:56.748722  475149 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:56.748855  475149 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:38:56.748902  475149 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:38:56.748909  475149 certs.go:257] generating profile certs ...
	I1009 19:38:56.748985  475149 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key
	I1009 19:38:56.749043  475149 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7
	I1009 19:38:56.749079  475149 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key
	I1009 19:38:56.749184  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:38:56.749218  475149 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:38:56.749226  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:38:56.749249  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:38:56.749270  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:38:56.749290  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:38:56.749330  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:38:56.749922  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:38:56.793839  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:38:56.836661  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:38:56.881185  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:38:56.944481  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:38:57.004468  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:38:57.065273  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:38:57.111675  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:38:57.159516  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:38:57.193974  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:38:57.224679  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:38:57.249240  475149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:38:57.263896  475149 ssh_runner.go:195] Run: openssl version
	I1009 19:38:57.271087  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:38:57.281107  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.285858  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.285969  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.329453  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:38:57.338547  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:38:57.347911  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.352745  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.352867  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.395027  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:38:57.405079  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:38:57.415089  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.422244  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.422359  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.481668  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:38:57.490775  475149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:38:57.495538  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:38:57.549723  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:38:57.601360  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:38:57.692400  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:38:57.791426  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:38:58.054948  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:38:58.224168  475149 kubeadm.go:400] StartCluster: {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:58.224323  475149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:38:58.224429  475149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:38:58.295613  475149 cri.go:89] found id: "776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3"
	I1009 19:38:58.295690  475149 cri.go:89] found id: "dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b"
	I1009 19:38:58.295718  475149 cri.go:89] found id: "3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6"
	I1009 19:38:58.295736  475149 cri.go:89] found id: "e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e"
	I1009 19:38:58.295767  475149 cri.go:89] found id: ""
	I1009 19:38:58.295849  475149 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:38:58.315263  475149 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:38:58Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:38:58.315427  475149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:38:58.352496  475149 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:38:58.352572  475149 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:38:58.352663  475149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:38:58.364117  475149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:58.364656  475149 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-678119" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:58.364824  475149 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-678119" cluster setting kubeconfig missing "no-preload-678119" context setting]
	I1009 19:38:58.365175  475149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.366813  475149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:38:58.383869  475149 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 19:38:58.383950  475149 kubeadm.go:601] duration metric: took 31.357412ms to restartPrimaryControlPlane
	I1009 19:38:58.383987  475149 kubeadm.go:402] duration metric: took 159.827914ms to StartCluster
	I1009 19:38:58.384023  475149 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.384115  475149 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:58.384839  475149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.385108  475149 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:58.385553  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:58.385514  475149 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:58.385729  475149 addons.go:69] Setting storage-provisioner=true in profile "no-preload-678119"
	I1009 19:38:58.385756  475149 addons.go:238] Setting addon storage-provisioner=true in "no-preload-678119"
	W1009 19:38:58.385789  475149 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:38:58.385831  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.386818  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.387032  475149 addons.go:69] Setting dashboard=true in profile "no-preload-678119"
	I1009 19:38:58.387076  475149 addons.go:238] Setting addon dashboard=true in "no-preload-678119"
	W1009 19:38:58.387121  475149 addons.go:247] addon dashboard should already be in state true
	I1009 19:38:58.387163  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.387373  475149 addons.go:69] Setting default-storageclass=true in profile "no-preload-678119"
	I1009 19:38:58.387399  475149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-678119"
	I1009 19:38:58.387676  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.387752  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.391628  475149 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:58.396249  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:58.438333  475149 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:38:58.441306  475149 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:38:58.444158  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:38:58.444184  475149 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:38:58.444265  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.448911  475149 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:38:58.451904  475149 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:58.451928  475149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:58.451993  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.463067  475149 addons.go:238] Setting addon default-storageclass=true in "no-preload-678119"
	W1009 19:38:58.463094  475149 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:38:58.463121  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.463526  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.498859  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:58.520867  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:58.523284  475149 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:58.523308  475149 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:58.523377  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.563152  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.330054  472674 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:38:54.330209  472674 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:38:55.832042  472674 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501826405s
	I1009 19:38:55.840261  472674 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:38:55.840363  472674 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 19:38:55.840711  472674 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:38:55.840800  472674 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:38:58.863878  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:58.891983  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:38:58.892055  475149 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:38:58.998644  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:59.001284  475149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:59.021274  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:38:59.021348  475149 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:38:59.188119  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:38:59.188192  475149 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:38:59.378216  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:38:59.378279  475149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:38:59.465321  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:38:59.465395  475149 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:38:59.514487  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:38:59.514563  475149 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:38:59.558617  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:38:59.558699  475149 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:38:59.588190  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:38:59.588279  475149 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:38:59.618512  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:38:59.618593  475149 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:38:59.643713  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:39:00.610608  472674 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.769277023s
	I1009 19:39:04.343086  472674 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.501851946s
	I1009 19:39:06.343526  472674 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502694801s
	I1009 19:39:06.369876  472674 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:39:06.387294  472674 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:39:06.405800  472674 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:39:06.406305  472674 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-779570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:39:06.423536  472674 kubeadm.go:318] [bootstrap-token] Using token: lmcsj0.9sm8uir04wanmzmq
	I1009 19:39:06.426543  472674 out.go:252]   - Configuring RBAC rules ...
	I1009 19:39:06.426674  472674 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:39:06.436263  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:39:06.447728  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:39:06.452470  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:39:06.460667  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:39:06.465978  472674 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:39:06.754038  472674 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:39:07.255177  472674 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:39:07.755329  472674 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:39:07.756969  472674 kubeadm.go:318] 
	I1009 19:39:07.757057  472674 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:39:07.757082  472674 kubeadm.go:318] 
	I1009 19:39:07.757168  472674 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:39:07.757178  472674 kubeadm.go:318] 
	I1009 19:39:07.757205  472674 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:39:07.757636  472674 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:39:07.757707  472674 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:39:07.757718  472674 kubeadm.go:318] 
	I1009 19:39:07.757775  472674 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:39:07.757784  472674 kubeadm.go:318] 
	I1009 19:39:07.757834  472674 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:39:07.757841  472674 kubeadm.go:318] 
	I1009 19:39:07.757895  472674 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:39:07.757978  472674 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:39:07.758053  472674 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:39:07.758062  472674 kubeadm.go:318] 
	I1009 19:39:07.758357  472674 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:39:07.758448  472674 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:39:07.758458  472674 kubeadm.go:318] 
	I1009 19:39:07.758721  472674 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token lmcsj0.9sm8uir04wanmzmq \
	I1009 19:39:07.758838  472674 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:39:07.759027  472674 kubeadm.go:318] 	--control-plane 
	I1009 19:39:07.759041  472674 kubeadm.go:318] 
	I1009 19:39:07.759298  472674 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:39:07.759310  472674 kubeadm.go:318] 
	I1009 19:39:07.759586  472674 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token lmcsj0.9sm8uir04wanmzmq \
	I1009 19:39:07.759866  472674 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 19:39:07.775985  472674 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:39:07.776271  472674 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:39:07.776415  472674 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:39:07.776431  472674 cni.go:84] Creating CNI manager for ""
	I1009 19:39:07.776440  472674 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:39:07.795000  472674 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:39:07.801486  472674 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:39:07.811908  472674 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:39:07.811932  472674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:39:07.836560  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:39:08.499348  472674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:39:08.499480  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:08.499543  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-779570 minikube.k8s.io/updated_at=2025_10_09T19_39_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=embed-certs-779570 minikube.k8s.io/primary=true
	I1009 19:39:08.888724  472674 ops.go:34] apiserver oom_adj: -16
	I1009 19:39:08.888844  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:09.147612  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.283704721s)
	I1009 19:39:09.147678  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.148965085s)
	I1009 19:39:09.147979  475149 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.146620107s)
	I1009 19:39:09.148013  475149 node_ready.go:35] waiting up to 6m0s for node "no-preload-678119" to be "Ready" ...
	I1009 19:39:09.211602  475149 node_ready.go:49] node "no-preload-678119" is "Ready"
	I1009 19:39:09.211632  475149 node_ready.go:38] duration metric: took 63.599366ms for node "no-preload-678119" to be "Ready" ...
	I1009 19:39:09.211646  475149 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:39:09.211706  475149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:39:09.337873  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.6940585s)
	I1009 19:39:09.337951  475149 api_server.go:72] duration metric: took 10.95271973s to wait for apiserver process to appear ...
	I1009 19:39:09.338027  475149 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:39:09.338046  475149 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:39:09.341094  475149 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-678119 addons enable metrics-server
	
	I1009 19:39:09.343577  475149 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 19:39:09.389493  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:09.889054  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:10.389575  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:10.889661  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:11.389592  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:11.889704  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:12.032460  472674 kubeadm.go:1113] duration metric: took 3.533024214s to wait for elevateKubeSystemPrivileges
	I1009 19:39:12.032488  472674 kubeadm.go:402] duration metric: took 25.214702493s to StartCluster
	I1009 19:39:12.032523  472674 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:39:12.032587  472674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:39:12.034016  472674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:39:12.034541  472674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:39:12.034543  472674 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:39:12.034835  472674 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:39:12.034892  472674 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:39:12.034954  472674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-779570"
	I1009 19:39:12.034971  472674 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-779570"
	I1009 19:39:12.034995  472674 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:39:12.035486  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.036060  472674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-779570"
	I1009 19:39:12.036083  472674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-779570"
	I1009 19:39:12.036402  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.038798  472674 out.go:179] * Verifying Kubernetes components...
	I1009 19:39:12.042674  472674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:39:12.077716  472674 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:39:12.081562  472674 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:39:12.081588  472674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:39:12.081672  472674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:39:12.092287  472674 addons.go:238] Setting addon default-storageclass=true in "embed-certs-779570"
	I1009 19:39:12.092333  472674 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:39:12.092764  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.131640  472674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:39:12.139869  472674 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:39:12.139893  472674 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:39:12.140052  472674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:39:12.171162  472674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:39:12.453116  472674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:39:12.491524  472674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:39:12.491590  472674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:39:12.523534  472674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:39:13.550205  472674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.097009245s)
	I1009 19:39:13.550231  472674 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.058671599s)
	I1009 19:39:13.550270  472674 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.058665282s)
	I1009 19:39:13.550282  472674 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
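	
	The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 for this profile) via an injected hosts block. A hedged way to confirm the result on the same cluster, assuming kubectl is pointed at the "embed-certs-779570" context:
	
	  # Hedged sketch: print the Corefile after the host-record injection logged above
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	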
	I1009 19:39:13.551726  472674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:39:13.552388  472674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.028816669s)
	I1009 19:39:13.644696  472674 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 19:39:09.346473  475149 addons.go:514] duration metric: took 10.96095221s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 19:39:09.361646  475149 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:39:09.361682  475149 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:39:09.838189  475149 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:39:09.846413  475149 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 19:39:09.847470  475149 api_server.go:141] control plane version: v1.34.1
	I1009 19:39:09.847496  475149 api_server.go:131] duration metric: took 509.460385ms to wait for apiserver health ...
	I1009 19:39:09.847507  475149 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:39:09.851215  475149 system_pods.go:59] 8 kube-system pods found
	I1009 19:39:09.851256  475149 system_pods.go:61] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:09.851265  475149 system_pods.go:61] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:39:09.851302  475149 system_pods.go:61] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:39:09.851312  475149 system_pods.go:61] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:39:09.851324  475149 system_pods.go:61] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:39:09.851329  475149 system_pods.go:61] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:39:09.851336  475149 system_pods.go:61] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:39:09.851344  475149 system_pods.go:61] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Running
	I1009 19:39:09.851351  475149 system_pods.go:74] duration metric: took 3.837394ms to wait for pod list to return data ...
	I1009 19:39:09.851379  475149 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:39:09.854041  475149 default_sa.go:45] found service account: "default"
	I1009 19:39:09.854065  475149 default_sa.go:55] duration metric: took 2.679038ms for default service account to be created ...
	I1009 19:39:09.854076  475149 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:39:09.856968  475149 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:09.857000  475149 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:09.857029  475149 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:39:09.857052  475149 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:39:09.857059  475149 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:39:09.857066  475149 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:39:09.857076  475149 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:39:09.857083  475149 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:39:09.857091  475149 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Running
	I1009 19:39:09.857113  475149 system_pods.go:126] duration metric: took 3.029943ms to wait for k8s-apps to be running ...
	I1009 19:39:09.857129  475149 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:39:09.857202  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:39:09.872843  475149 system_svc.go:56] duration metric: took 15.704951ms WaitForService to wait for kubelet
	I1009 19:39:09.872874  475149 kubeadm.go:586] duration metric: took 11.487705051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:39:09.872892  475149 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:39:09.876212  475149 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:39:09.876243  475149 node_conditions.go:123] node cpu capacity is 2
	I1009 19:39:09.876257  475149 node_conditions.go:105] duration metric: took 3.358365ms to run NodePressure ...
	I1009 19:39:09.876269  475149 start.go:241] waiting for startup goroutines ...
	I1009 19:39:09.876277  475149 start.go:246] waiting for cluster config update ...
	I1009 19:39:09.876288  475149 start.go:255] writing updated cluster config ...
	I1009 19:39:09.876587  475149 ssh_runner.go:195] Run: rm -f paused
	I1009 19:39:09.880841  475149 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:09.884250  475149 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:39:11.889457  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	I1009 19:39:13.647509  472674 addons.go:514] duration metric: took 1.612597289s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 19:39:14.059255  472674 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-779570" context rescaled to 1 replicas
	W1009 19:39:13.890731  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:16.390611  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:15.557648  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:18.055169  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:18.891032  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:20.892125  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:23.394200  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:20.062765  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:22.555505  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:25.889428  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:27.893240  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:25.054817  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:27.054867  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:30.390232  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:32.890032  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:29.555075  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:32.055287  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:34.056012  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:34.890646  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:37.389399  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:36.554953  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:38.555125  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:39.396416  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:41.890687  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:41.055629  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:43.554918  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:44.389704  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:46.390038  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:45.555610  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:48.054785  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:48.889857  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	I1009 19:39:49.392625  475149 pod_ready.go:94] pod "coredns-66bc5c9577-cfmf8" is "Ready"
	I1009 19:39:49.392715  475149 pod_ready.go:86] duration metric: took 39.508438777s for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.396449  475149 pod_ready.go:83] waiting for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.402278  475149 pod_ready.go:94] pod "etcd-no-preload-678119" is "Ready"
	I1009 19:39:49.402304  475149 pod_ready.go:86] duration metric: took 5.826472ms for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.405956  475149 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.410555  475149 pod_ready.go:94] pod "kube-apiserver-no-preload-678119" is "Ready"
	I1009 19:39:49.410587  475149 pod_ready.go:86] duration metric: took 4.602417ms for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.412948  475149 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.588332  475149 pod_ready.go:94] pod "kube-controller-manager-no-preload-678119" is "Ready"
	I1009 19:39:49.588360  475149 pod_ready.go:86] duration metric: took 175.386297ms for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.788632  475149 pod_ready.go:83] waiting for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.189292  475149 pod_ready.go:94] pod "kube-proxy-cf6gt" is "Ready"
	I1009 19:39:50.189321  475149 pod_ready.go:86] duration metric: took 400.662881ms for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.388545  475149 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.788692  475149 pod_ready.go:94] pod "kube-scheduler-no-preload-678119" is "Ready"
	I1009 19:39:50.788721  475149 pod_ready.go:86] duration metric: took 400.15168ms for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.788735  475149 pod_ready.go:40] duration metric: took 40.907858692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:50.844656  475149 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:39:50.847818  475149 out.go:179] * Done! kubectl is now configured to use "no-preload-678119" cluster and "default" namespace by default
	W1009 19:39:50.054842  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:52.055250  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:54.555262  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	I1009 19:39:55.556813  472674 node_ready.go:49] node "embed-certs-779570" is "Ready"
	I1009 19:39:55.556840  472674 node_ready.go:38] duration metric: took 42.005077378s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:39:55.556854  472674 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:39:55.556916  472674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:39:55.571118  472674 api_server.go:72] duration metric: took 43.536495654s to wait for apiserver process to appear ...
	I1009 19:39:55.571141  472674 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:39:55.571160  472674 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:39:55.581899  472674 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:39:55.583069  472674 api_server.go:141] control plane version: v1.34.1
	I1009 19:39:55.583099  472674 api_server.go:131] duration metric: took 11.951146ms to wait for apiserver health ...
	I1009 19:39:55.583110  472674 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:39:55.586206  472674 system_pods.go:59] 8 kube-system pods found
	I1009 19:39:55.586242  472674 system_pods.go:61] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.586250  472674 system_pods.go:61] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.586257  472674 system_pods.go:61] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.586262  472674 system_pods.go:61] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.586267  472674 system_pods.go:61] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.586273  472674 system_pods.go:61] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.586277  472674 system_pods.go:61] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.586284  472674 system_pods.go:61] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.586299  472674 system_pods.go:74] duration metric: took 3.182256ms to wait for pod list to return data ...
	I1009 19:39:55.586309  472674 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:39:55.589128  472674 default_sa.go:45] found service account: "default"
	I1009 19:39:55.589156  472674 default_sa.go:55] duration metric: took 2.840943ms for default service account to be created ...
	I1009 19:39:55.589166  472674 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:39:55.593610  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:55.593642  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.593648  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.593655  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.593659  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.593664  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.593668  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.593673  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.593679  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.593704  472674 retry.go:31] will retry after 245.493217ms: missing components: kube-dns
	I1009 19:39:55.844658  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:55.844692  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.844699  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.844722  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.844727  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.844732  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.844736  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.844740  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.844746  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.844761  472674 retry.go:31] will retry after 270.704249ms: missing components: kube-dns
	I1009 19:39:56.120386  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:56.120421  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:56.120428  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:56.120434  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:56.120439  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:56.120445  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:56.120449  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:56.120453  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:56.120459  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:56.120497  472674 retry.go:31] will retry after 482.359976ms: missing components: kube-dns
	I1009 19:39:56.606422  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:56.606457  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:56.606465  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:56.606471  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:56.606475  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:56.606480  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:56.606484  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:56.606489  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:56.606495  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:56.606514  472674 retry.go:31] will retry after 538.519972ms: missing components: kube-dns
	I1009 19:39:57.150098  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:57.150205  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Running
	I1009 19:39:57.150219  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:57.150225  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:57.150232  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:57.150242  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:57.150247  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:57.150251  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:57.150255  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Running
	I1009 19:39:57.150266  472674 system_pods.go:126] duration metric: took 1.56109474s to wait for k8s-apps to be running ...
	I1009 19:39:57.150279  472674 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:39:57.150332  472674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:39:57.168317  472674 system_svc.go:56] duration metric: took 18.028148ms WaitForService to wait for kubelet
	I1009 19:39:57.168348  472674 kubeadm.go:586] duration metric: took 45.133730211s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:39:57.168367  472674 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:39:57.171899  472674 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:39:57.171942  472674 node_conditions.go:123] node cpu capacity is 2
	I1009 19:39:57.171957  472674 node_conditions.go:105] duration metric: took 3.584132ms to run NodePressure ...
	I1009 19:39:57.171969  472674 start.go:241] waiting for startup goroutines ...
	I1009 19:39:57.171977  472674 start.go:246] waiting for cluster config update ...
	I1009 19:39:57.171990  472674 start.go:255] writing updated cluster config ...
	I1009 19:39:57.172347  472674 ssh_runner.go:195] Run: rm -f paused
	I1009 19:39:57.177547  472674 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:57.182493  472674 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.196579  472674 pod_ready.go:94] pod "coredns-66bc5c9577-4c9xb" is "Ready"
	I1009 19:39:57.196608  472674 pod_ready.go:86] duration metric: took 14.085128ms for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.199978  472674 pod_ready.go:83] waiting for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.208562  472674 pod_ready.go:94] pod "etcd-embed-certs-779570" is "Ready"
	I1009 19:39:57.208600  472674 pod_ready.go:86] duration metric: took 8.594816ms for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.211279  472674 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.217066  472674 pod_ready.go:94] pod "kube-apiserver-embed-certs-779570" is "Ready"
	I1009 19:39:57.217103  472674 pod_ready.go:86] duration metric: took 5.798296ms for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.219580  472674 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.582171  472674 pod_ready.go:94] pod "kube-controller-manager-embed-certs-779570" is "Ready"
	I1009 19:39:57.582252  472674 pod_ready.go:86] duration metric: took 362.649708ms for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.782320  472674 pod_ready.go:83] waiting for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.181328  472674 pod_ready.go:94] pod "kube-proxy-sp4bk" is "Ready"
	I1009 19:39:58.181359  472674 pod_ready.go:86] duration metric: took 399.01215ms for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.381832  472674 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.781804  472674 pod_ready.go:94] pod "kube-scheduler-embed-certs-779570" is "Ready"
	I1009 19:39:58.781835  472674 pod_ready.go:86] duration metric: took 399.975272ms for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.781847  472674 pod_ready.go:40] duration metric: took 1.604264696s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:58.836096  472674 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:39:58.841462  472674 out.go:179] * Done! kubectl is now configured to use "embed-certs-779570" cluster and "default" namespace by default
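	
	The two interleaved wait loops above (process 472674 for embed-certs-779570, process 475149 for no-preload-678119) poll the apiserver /healthz endpoint, the node Ready condition, and the kube-system pod set until everything reports Ready. A rough manual equivalent against the same clusters, assuming kubectl is pointed at the matching context, would be:
	
	  # Hedged sketch: per-hook health view, e.g. to see a hook like poststarthook/rbac/bootstrap-roles that returned 500 earlier
	  kubectl get --raw '/healthz?verbose'
	  # The objects the system_pods / node_ready loops inspect
	  kubectl get pods -n kube-system -o wide
	  kubectl get nodes -o wide
	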
	
	
	==> CRI-O <==
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.362975997Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.366264412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.366301163Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.36632573Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.369419944Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.369456942Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.369532151Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.372435766Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.372469654Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.372495229Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.37566192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:39:48 no-preload-678119 crio[652]: time="2025-10-09T19:39:48.375697375Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.174820772Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e2b8df12-ae1d-4522-9660-769a17ccb93c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.187309778Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=24ceca1a-e4d4-4445-91e6-110e22ee5aa5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.193784586Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v/dashboard-metrics-scraper" id=9ab5ecbf-fe6d-43d9-b8be-d8e25d7ca0f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.194072015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.203366909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.204034018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.224794703Z" level=info msg="Created container 96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v/dashboard-metrics-scraper" id=9ab5ecbf-fe6d-43d9-b8be-d8e25d7ca0f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.228045038Z" level=info msg="Starting container: 96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521" id=71ad50b5-156d-4651-b7f4-bffbb947bbb6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.231452175Z" level=info msg="Started container" PID=1715 containerID=96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v/dashboard-metrics-scraper id=71ad50b5-156d-4651-b7f4-bffbb947bbb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3cd054d4e49dd57196b06059733d2b460bc079f3b40436a2b37c6ca8feeefe0
	Oct 09 19:39:57 no-preload-678119 conmon[1713]: conmon 96f3bd2101071843a9c2 <ninfo>: container 1715 exited with status 1
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.540504763Z" level=info msg="Removing container: eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1" id=014bacf8-e9e3-489e-81ef-d0d7d7b83d40 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.54805658Z" level=info msg="Error loading conmon cgroup of container eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1: cgroup deleted" id=014bacf8-e9e3-489e-81ef-d0d7d7b83d40 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:39:57 no-preload-678119 crio[652]: time="2025-10-09T19:39:57.553878539Z" level=info msg="Removed container eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v/dashboard-metrics-scraper" id=014bacf8-e9e3-489e-81ef-d0d7d7b83d40 name=/runtime.v1.RuntimeService/RemoveContainer
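	
	The CRI-O entries above are the runtime's systemd journal on the no-preload node, covering the kindnet CNI config rewrites and the dashboard-metrics-scraper container that exits with status 1. Assuming journal access inside the minikube node, roughly the same stream can be pulled with:
	
	  # Hedged sketch: read the CRI-O journal on the node (profile name taken from the log above)
	  minikube ssh -p no-preload-678119 -- sudo journalctl -u crio --since "10 min ago" --no-pager
	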
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	96f3bd2101071       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   3                   a3cd054d4e49d       dashboard-metrics-scraper-6ffb444bf9-96r8v   kubernetes-dashboard
	4110f531550ee       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           29 seconds ago       Running             storage-provisioner         2                   10349560f3977       storage-provisioner                          kube-system
	596a04caf4eb7       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   4e4b5a451505b       kubernetes-dashboard-855c9754f9-6zf28        kubernetes-dashboard
	e5baf5832a360       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   607e94b520909       coredns-66bc5c9577-cfmf8                     kube-system
	4f359dfcc7a8f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   1914904cb7c3d       busybox                                      default
	b244ecd4999c0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   af03f47df2cca       kindnet-rg6kc                                kube-system
	5ec0cc7ddf9a2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   f0c8e30b405de       kube-proxy-cf6gt                             kube-system
	3d1a1090c5752       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   10349560f3977       storage-provisioner                          kube-system
	776d5d2a3dd24       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   84263d0de6094       kube-controller-manager-no-preload-678119    kube-system
	dffc78179e9e6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0920abad12200       kube-scheduler-no-preload-678119             kube-system
	3c637faddd2e8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b2e59a413c49a       kube-apiserver-no-preload-678119             kube-system
	e31f57ead2454       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   363cd8e1a22f5       etcd-no-preload-678119                       kube-system
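	
	The container status table is the CRI view of the node: dashboard-metrics-scraper is Exited on attempt 3 and the first storage-provisioner container is also Exited, consistent with the restarts logged earlier. Assuming crictl is available inside the node (it normally is in the minikube image), a hedged equivalent is:
	
	  # Hedged sketch: list all CRI containers, including exited ones, as in the table above
	  minikube ssh -p no-preload-678119 -- sudo crictl ps -a
	  # Fetch logs for the exiting scraper container; the ID prefix is taken from the table
	  minikube ssh -p no-preload-678119 -- sudo crictl logs 96f3bd2101071
	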
	
	
	==> coredns [e5baf5832a360386e095d5e41a8324cfa2d11a9c88e8f2319bfc1252311ac7b4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42474 - 14178 "HINFO IN 7820871439522576481.5862110629079179430. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023368941s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
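	
	The coredns errors above ("dial tcp 10.96.0.1:443: i/o timeout") show CoreDNS unable to reach the apiserver through the default service VIP while the pod network was still settling; the "plugin/ready" waits clear once the API becomes reachable again, matching the coredns-66bc5c9577-cfmf8 pod turning Ready at 19:39:49 in the start log. A hedged re-check from the cluster side, using the standard k8s-app=kube-dns label:
	
	  # Hedged sketch: confirm CoreDNS is serving and can list API objects again
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
	  kubectl -n kube-system get endpointslices -l kubernetes.io/service-name=kube-dns
	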
	
	
	==> describe nodes <==
	Name:               no-preload-678119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-678119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=no-preload-678119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_38_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-678119
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:39:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:39:36 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:39:36 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:39:36 +0000   Thu, 09 Oct 2025 19:37:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:39:36 +0000   Thu, 09 Oct 2025 19:38:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-678119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b75a7a9ef584b398f5fa81ed5aad07c
	  System UUID:                b33fed70-8b70-482e-bac9-78dc101bc1cd
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 coredns-66bc5c9577-cfmf8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 etcd-no-preload-678119                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m10s
	  kube-system                 kindnet-rg6kc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-no-preload-678119              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-no-preload-678119     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-cf6gt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-no-preload-678119              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-96r8v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6zf28         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m3s                   kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node no-preload-678119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node no-preload-678119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s (x8 over 2m22s)  kubelet          Node no-preload-678119 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m10s                  kubelet          Node no-preload-678119 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s                  kubelet          Node no-preload-678119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m10s                  kubelet          Node no-preload-678119 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m6s                   node-controller  Node no-preload-678119 event: Registered Node no-preload-678119 in Controller
	  Normal   NodeReady                111s                   kubelet          Node no-preload-678119 status is now: NodeReady
	  Normal   Starting                 73s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)      kubelet          Node no-preload-678119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)      kubelet          Node no-preload-678119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)      kubelet          Node no-preload-678119 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node no-preload-678119 event: Registered Node no-preload-678119 in Controller
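	
	The node description is standard kubectl describe node output for the no-preload profile; the duplicated NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID event pairs and the second "Starting kubelet." line reflect the stop/start this test group exercises, and the node ends up Ready with kindnet's 10.244.0.0/24 PodCIDR assigned. The same view can be regenerated with:
	
	  # Regenerate the node view captured above
	  kubectl describe node no-preload-678119
	  # Just the condition summary, if the full dump is too noisy
	  kubectl get node no-preload-678119 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	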
	
	
	==> dmesg <==
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
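	
	The dmesg lines are kernel notices from the shared CI host's 5.15 arm64 kernel: "overlayfs: idmapped layers are currently not supported" appears to be emitted whenever an overlay mount requests idmapped layers, and in this run it reads as informational noise rather than a failure. A hedged way to pull the same messages with readable timestamps from the node:
	
	  # Hedged sketch: kernel overlayfs messages with human-readable timestamps
	  minikube ssh -p no-preload-678119 -- sudo dmesg --ctime | grep -i overlayfs
	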
	
	
	==> etcd [e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e] <==
	{"level":"warn","ts":"2025-10-09T19:39:02.831740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.883389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.949843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.970476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.090485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.115234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.160291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.191269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.243126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.284793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.342231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.390306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.463844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.518195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.549227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.603242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.701102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.722810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.784044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.822944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:03.958760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:04.002672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:04.052997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:04.108273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:04.251983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36612","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:40:09 up  2:22,  0 user,  load average: 2.92, 2.85, 2.30
	Linux no-preload-678119 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b244ecd4999c0ff9c2715a575dc3c0a4f78e1e61014e5148fa3a221fcd2c5d67] <==
	I1009 19:39:08.070786       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:39:08.089213       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 19:39:08.089586       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:39:08.089603       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:39:08.089619       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:39:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:39:08.353473       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:39:08.353492       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:39:08.353501       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:39:08.353789       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:39:38.354038       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:39:38.354054       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:39:38.354198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:39:38.354250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 19:39:39.653688       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:39:39.653801       1 metrics.go:72] Registering metrics
	I1009 19:39:39.653891       1 controller.go:711] "Syncing nftables rules"
	I1009 19:39:48.354288       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:48.354413       1 main.go:301] handling current node
	I1009 19:39:58.352672       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:58.352783       1 main.go:301] handling current node
	I1009 19:40:08.354448       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:40:08.354484       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6] <==
	I1009 19:39:06.252474       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:39:06.267351       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:39:06.267589       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:39:06.273985       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1009 19:39:06.276088       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:39:06.286051       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 19:39:06.286107       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:39:06.310888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:39:06.330401       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:39:06.330670       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 19:39:06.330697       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 19:39:06.330745       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:39:06.331054       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:39:06.341108       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:39:07.053996       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:39:07.097892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:39:08.490060       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:39:08.834917       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:39:08.988539       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:39:09.063686       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:39:09.301888       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.59.16"}
	I1009 19:39:09.329293       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.246.190"}
	I1009 19:39:11.967704       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:39:12.123282       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:39:12.172961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3] <==
	I1009 19:39:11.753627       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:39:11.759109       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:39:11.759341       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:39:11.759395       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 19:39:11.759646       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:39:11.759707       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:39:11.763029       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 19:39:11.778431       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:39:11.780779       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 19:39:11.781969       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:39:11.782398       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 19:39:11.782478       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 19:39:11.782526       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 19:39:11.782592       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 19:39:11.782621       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 19:39:11.785709       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 19:39:11.785788       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 19:39:11.785938       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:39:11.786030       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-678119"
	I1009 19:39:11.786555       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 19:39:11.803694       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:39:11.803994       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:39:11.809398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:39:11.809430       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:39:11.809437       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5ec0cc7ddf9a258bdb958420cd6e2751c0d81f215b0fa96445c52c0fcc7c6d19] <==
	I1009 19:39:09.533409       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:39:09.635849       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:39:09.741469       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:39:09.741581       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 19:39:09.741683       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:39:09.770984       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:39:09.771121       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:39:09.774826       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:39:09.775187       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:39:09.775396       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:39:09.777055       1 config.go:200] "Starting service config controller"
	I1009 19:39:09.777112       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:39:09.777173       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:39:09.777200       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:39:09.777234       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:39:09.777259       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:39:09.778034       1 config.go:309] "Starting node config controller"
	I1009 19:39:09.778085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:39:09.778115       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:39:09.878508       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:39:09.878598       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:39:09.878621       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b] <==
	I1009 19:39:05.166621       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:39:09.708064       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:39:09.708100       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:39:09.713596       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:39:09.713685       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 19:39:09.713711       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 19:39:09.713753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:39:09.714598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:39:09.714622       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:39:09.714639       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:39:09.714645       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:39:09.813802       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 19:39:09.815479       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:39:09.815532       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:39:18 no-preload-678119 kubelet[773]: I1009 19:39:18.409851     773 scope.go:117] "RemoveContainer" containerID="adc63c6b7d741e69714ffde7280b5f9411d06368984b8fb920fe8ba699135f21"
	Oct 09 19:39:19 no-preload-678119 kubelet[773]: I1009 19:39:19.416032     773 scope.go:117] "RemoveContainer" containerID="adc63c6b7d741e69714ffde7280b5f9411d06368984b8fb920fe8ba699135f21"
	Oct 09 19:39:19 no-preload-678119 kubelet[773]: I1009 19:39:19.416431     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:19 no-preload-678119 kubelet[773]: E1009 19:39:19.416574     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:20 no-preload-678119 kubelet[773]: I1009 19:39:20.427496     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:20 no-preload-678119 kubelet[773]: E1009 19:39:20.430924     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:22 no-preload-678119 kubelet[773]: I1009 19:39:22.624451     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:22 no-preload-678119 kubelet[773]: E1009 19:39:22.624631     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: I1009 19:39:33.172966     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: I1009 19:39:33.469728     773 scope.go:117] "RemoveContainer" containerID="8eb594ef4a468b255f84e753db8c4b4ae365fc0134e903f44abb2d958d132d66"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: I1009 19:39:33.470031     773 scope.go:117] "RemoveContainer" containerID="eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: E1009 19:39:33.470229     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:33 no-preload-678119 kubelet[773]: I1009 19:39:33.496974     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6zf28" podStartSLOduration=11.657728668 podStartE2EDuration="21.496957233s" podCreationTimestamp="2025-10-09 19:39:12 +0000 UTC" firstStartedPulling="2025-10-09 19:39:12.730154041 +0000 UTC m=+15.988356694" lastFinishedPulling="2025-10-09 19:39:22.569382606 +0000 UTC m=+25.827585259" observedRunningTime="2025-10-09 19:39:23.457164366 +0000 UTC m=+26.715367036" watchObservedRunningTime="2025-10-09 19:39:33.496957233 +0000 UTC m=+36.755159886"
	Oct 09 19:39:39 no-preload-678119 kubelet[773]: I1009 19:39:39.489416     773 scope.go:117] "RemoveContainer" containerID="3d1a1090c5752c6b20ae312cdcd4f613e1bd218e9761dacc7b73be9f83733513"
	Oct 09 19:39:42 no-preload-678119 kubelet[773]: I1009 19:39:42.624875     773 scope.go:117] "RemoveContainer" containerID="eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1"
	Oct 09 19:39:42 no-preload-678119 kubelet[773]: E1009 19:39:42.625496     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:39:57 no-preload-678119 kubelet[773]: I1009 19:39:57.172524     773 scope.go:117] "RemoveContainer" containerID="eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1"
	Oct 09 19:39:57 no-preload-678119 kubelet[773]: I1009 19:39:57.538610     773 scope.go:117] "RemoveContainer" containerID="eaa0694529e9433ca13f7675053e7e877be03e0864b49f1ef8760536135d4bf1"
	Oct 09 19:39:57 no-preload-678119 kubelet[773]: I1009 19:39:57.538938     773 scope.go:117] "RemoveContainer" containerID="96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521"
	Oct 09 19:39:57 no-preload-678119 kubelet[773]: E1009 19:39:57.539091     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:40:02 no-preload-678119 kubelet[773]: I1009 19:40:02.626692     773 scope.go:117] "RemoveContainer" containerID="96f3bd2101071843a9c2c26a0d124539831d8b90e1c145d0c8195e7eef800521"
	Oct 09 19:40:02 no-preload-678119 kubelet[773]: E1009 19:40:02.626890     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-96r8v_kubernetes-dashboard(c0a9307f-6da5-4568-bf76-c6c95033d645)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-96r8v" podUID="c0a9307f-6da5-4568-bf76-c6c95033d645"
	Oct 09 19:40:03 no-preload-678119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:40:03 no-preload-678119 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:40:03 no-preload-678119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [596a04caf4eb777f4ae839f82d27926841fdbc20c77d098530cd6c5998be36fc] <==
	2025/10/09 19:39:22 Using namespace: kubernetes-dashboard
	2025/10/09 19:39:22 Using in-cluster config to connect to apiserver
	2025/10/09 19:39:22 Using secret token for csrf signing
	2025/10/09 19:39:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 19:39:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 19:39:22 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 19:39:22 Generating JWE encryption key
	2025/10/09 19:39:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 19:39:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 19:39:23 Initializing JWE encryption key from synchronized object
	2025/10/09 19:39:23 Creating in-cluster Sidecar client
	2025/10/09 19:39:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:39:23 Serving insecurely on HTTP port: 9090
	2025/10/09 19:39:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:39:22 Starting overwatch
	
	
	==> storage-provisioner [3d1a1090c5752c6b20ae312cdcd4f613e1bd218e9761dacc7b73be9f83733513] <==
	I1009 19:39:08.478397       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:39:38.481029       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4110f531550ee3bba9d39633377e98dc4d9b90cbabe48ed631473d7cc3c34d1f] <==
	W1009 19:39:39.567596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:43.025734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:47.286335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:50.884801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:53.938781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:56.960670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:56.967535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:39:56.967902       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:39:56.968165       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-678119_c912bbb6-7927-4397-ba84-03ab1213976e!
	I1009 19:39:56.969061       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56611be6-d733-49ae-861a-2846139fa527", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-678119_c912bbb6-7927-4397-ba84-03ab1213976e became leader
	W1009 19:39:56.977206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:56.988102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:39:57.068426       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-678119_c912bbb6-7927-4397-ba84-03ab1213976e!
	W1009 19:39:58.991554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:59.000003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:01.014517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:01.027089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:03.030486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:03.037612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:05.048041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:05.053915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:07.057155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:07.068586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:09.073457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:09.104129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-678119 -n no-preload-678119
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-678119 -n no-preload-678119: exit status 2 (569.484265ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-678119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.02s)
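For reference, the pause/verify sequence this test exercises can be reconstructed from the report itself; the invocations below are a sketch assembled from the audit log and the status check above, not a re-run:

	out/minikube-linux-arm64 pause -p no-preload-678119 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-678119 -n no-preload-678119

The audit table later in this report records the pause command starting at 09 Oct 25 19:40 UTC with no end time, while the status check above still reports the API server as Running (exit status 2), consistent with the pause never completing within the test's window.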

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (366.809335ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:40:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
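The exit status 11 comes from minikube's paused-state check during addon enable (the "check paused: list paused: runc" chain in the error), which shells out to "sudo runc list -f json" on the node and fails because /run/runc does not exist, as the stderr above shows. A hedged manual reproduction, assuming the node is reachable with minikube ssh and that crictl is present in the node image (these commands were not run as part of this report):

	out/minikube-linux-arm64 ssh -p embed-certs-779570 -- sudo runc list -f json   # reproduces: open /run/runc: no such file or directory
	out/minikube-linux-arm64 ssh -p embed-certs-779570 -- sudo crictl ps           # cri-o's own view of the containers, which are still running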
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-779570 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-779570 describe deploy/metrics-server -n kube-system: exit status 1 (152.817882ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-779570 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
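Had the enable succeeded, the assertion in start_stop_delete_test.go:219 amounts to checking the image on the metrics-server deployment. A hand-run equivalent of that check, as a sketch (in this run the deployment was never created, so it would currently return NotFound, as shown above):

	kubectl --context embed-certs-779570 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4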
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-779570
helpers_test.go:243: (dbg) docker inspect embed-certs-779570:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0",
	        "Created": "2025-10-09T19:38:39.409674246Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 473586,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:38:39.471254512Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/hosts",
	        "LogPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0-json.log",
	        "Name": "/embed-certs-779570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-779570:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-779570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0",
	                "LowerDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-779570",
	                "Source": "/var/lib/docker/volumes/embed-certs-779570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-779570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-779570",
	                "name.minikube.sigs.k8s.io": "embed-certs-779570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4cc1b4fd378bcc8e0298641cc215ef0ff240e88ce372c039b92371e86e7ff93f",
	            "SandboxKey": "/var/run/docker/netns/4cc1b4fd378b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-779570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:20:96:c8:6c:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "28e70e683a9e94690b95b84e3e58ac8af1a42ba0d4f6a915911a12474f440d3d",
	                    "EndpointID": "b54b1e6296058d2647a1ca798a0281cbb89756a93f6ee953ec9b8043ac3b3188",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-779570",
	                        "81a5b0bcbd3e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-779570 -n embed-certs-779570
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-779570 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-779570 logs -n 25: (1.638262075s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-028248                                                                                                                                                                                                                   │ force-systemd-env-028248 │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ cert-options-983220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ ssh     │ -p cert-options-983220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ delete  │ -p cert-options-983220                                                                                                                                                                                                                        │ cert-options-983220      │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-259172   │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │ 09 Oct 25 19:37 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-271815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-271815 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ delete  │ -p cert-expiration-259172                                                                                                                                                                                                                     │ cert-expiration-259172   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815   │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ stop    │ -p no-preload-678119 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p no-preload-678119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119        │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-779570       │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:38:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:38:48.813211  475149 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:48.813414  475149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:48.813442  475149 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:48.813462  475149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:48.813744  475149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:38:48.814213  475149 out.go:368] Setting JSON to false
	I1009 19:38:48.815200  475149 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8480,"bootTime":1760030249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:38:48.815292  475149 start.go:141] virtualization:  
	I1009 19:38:48.818274  475149 out.go:179] * [no-preload-678119] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:38:48.822174  475149 notify.go:220] Checking for updates...
	I1009 19:38:48.825926  475149 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:38:48.828915  475149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:48.831842  475149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:48.834784  475149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:38:48.837573  475149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:38:48.840448  475149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:38:48.843897  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:48.844471  475149 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:38:48.874806  475149 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:38:48.874928  475149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:48.987721  475149 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:38:48.978239172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:38:48.987828  475149 docker.go:318] overlay module found
	I1009 19:38:48.991227  475149 out.go:179] * Using the docker driver based on existing profile
	I1009 19:38:48.994112  475149 start.go:305] selected driver: docker
	I1009 19:38:48.994277  475149 start.go:925] validating driver "docker" against &{Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:48.994394  475149 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:38:48.995048  475149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:38:49.091619  475149 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:38:49.077776228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:38:49.091980  475149 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:38:49.092010  475149 cni.go:84] Creating CNI manager for ""
	I1009 19:38:49.092070  475149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:38:49.092114  475149 start.go:349] cluster config:
	{Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:49.095401  475149 out.go:179] * Starting "no-preload-678119" primary control-plane node in "no-preload-678119" cluster
	I1009 19:38:49.098259  475149 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:38:49.101090  475149 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:38:49.103853  475149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:49.104001  475149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/config.json ...
	I1009 19:38:49.104325  475149 cache.go:107] acquiring lock: {Name:mkf75ee142286ad1bdc0e9c0aa3f48e64fafdbe8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104424  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 19:38:49.104438  475149 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 125.819µs
	I1009 19:38:49.104456  475149 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 19:38:49.104469  475149 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:38:49.104663  475149 cache.go:107] acquiring lock: {Name:mk25f7c277db514655a4eee10ac8e6ce05f41968 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104735  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1009 19:38:49.104747  475149 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 90.06µs
	I1009 19:38:49.104755  475149 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1009 19:38:49.104767  475149 cache.go:107] acquiring lock: {Name:mkf23fc2fd145cfb44f93f7bd77348bc96e294c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104802  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1009 19:38:49.104812  475149 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 46.36µs
	I1009 19:38:49.104819  475149 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1009 19:38:49.104828  475149 cache.go:107] acquiring lock: {Name:mkf1b5cecee0ad7719ec268fb80d35042f8ea9ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104861  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1009 19:38:49.104870  475149 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.503µs
	I1009 19:38:49.104876  475149 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1009 19:38:49.104885  475149 cache.go:107] acquiring lock: {Name:mk6d2ee36782fdd52dfc3b1b6d6b824788680c23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104911  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1009 19:38:49.104920  475149 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 36.333µs
	I1009 19:38:49.104927  475149 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1009 19:38:49.104937  475149 cache.go:107] acquiring lock: {Name:mkbd960140c8f1b68fbb8e3db795bee47fe958c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.104967  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1009 19:38:49.104976  475149 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.796µs
	I1009 19:38:49.104989  475149 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1009 19:38:49.105002  475149 cache.go:107] acquiring lock: {Name:mkb6bcbed58f86de43d5846c736eec4c3f941cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.105034  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1009 19:38:49.105043  475149 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.593µs
	I1009 19:38:49.105105  475149 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1009 19:38:49.105128  475149 cache.go:107] acquiring lock: {Name:mkae0e70582a2b9e175be8a94ecf46f19839bead Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.105176  475149 cache.go:115] /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1009 19:38:49.105187  475149 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 61.49µs
	I1009 19:38:49.105204  475149 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1009 19:38:49.105211  475149 cache.go:87] Successfully saved all images to host disk.
	I1009 19:38:49.127767  475149 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:38:49.127793  475149 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:38:49.127807  475149 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:38:49.127832  475149 start.go:360] acquireMachinesLock for no-preload-678119: {Name:mk55480b0ad862c0c372f2026083e24864004a2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:38:49.127889  475149 start.go:364] duration metric: took 37.367µs to acquireMachinesLock for "no-preload-678119"
	I1009 19:38:49.127913  475149 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:38:49.127918  475149 fix.go:54] fixHost starting: 
	I1009 19:38:49.128192  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:49.160203  475149 fix.go:112] recreateIfNeeded on no-preload-678119: state=Stopped err=<nil>
	W1009 19:38:49.160241  475149 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:38:47.090536  472674 out.go:252]   - Generating certificates and keys ...
	I1009 19:38:47.090649  472674 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:38:47.090723  472674 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:38:47.663776  472674 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:38:47.916921  472674 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:38:48.398750  472674 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:38:48.896575  472674 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:38:49.033784  472674 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:38:49.034339  472674 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-779570 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:38:49.163507  475149 out.go:252] * Restarting existing docker container for "no-preload-678119" ...
	I1009 19:38:49.163597  475149 cli_runner.go:164] Run: docker start no-preload-678119
	I1009 19:38:49.501880  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:49.533606  475149 kic.go:430] container "no-preload-678119" state is running.
	I1009 19:38:49.534010  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:49.558157  475149 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/config.json ...
	I1009 19:38:49.558394  475149 machine.go:93] provisionDockerMachine start ...
	I1009 19:38:49.558454  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:49.592483  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:49.592796  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:49.592814  475149 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:38:49.593389  475149 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34622->127.0.0.1:33440: read: connection reset by peer
	I1009 19:38:52.746325  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-678119
	
	I1009 19:38:52.746353  475149 ubuntu.go:182] provisioning hostname "no-preload-678119"
	I1009 19:38:52.746422  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:52.771340  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:52.771712  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:52.771726  475149 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-678119 && echo "no-preload-678119" | sudo tee /etc/hostname
	I1009 19:38:52.958100  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-678119
	
	I1009 19:38:52.958350  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:52.987751  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:52.988125  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:52.988149  475149 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-678119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-678119/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-678119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:38:53.155148  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
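Note: the snippet run over SSH above keeps the node's own hostname resolvable locally — if no /etc/hosts line ends in "no-preload-678119", it either rewrites an existing 127.0.1.1 entry or appends one. A quick way to confirm the mapping on the node (a hypothetical check, not part of the test run):

    # verify the 127.0.1.1 mapping the snippet above maintains
    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expect: 127.0.1.1 no-preload-678119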
	I1009 19:38:53.155225  475149 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:38:53.155293  475149 ubuntu.go:190] setting up certificates
	I1009 19:38:53.155321  475149 provision.go:84] configureAuth start
	I1009 19:38:53.155442  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:53.180487  475149 provision.go:143] copyHostCerts
	I1009 19:38:53.180573  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:38:53.180583  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:38:53.180669  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:38:53.180767  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:38:53.180773  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:38:53.180801  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:38:53.180887  475149 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:38:53.180892  475149 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:38:53.180919  475149 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:38:53.180968  475149 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.no-preload-678119 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-678119]
	I1009 19:38:53.630178  475149 provision.go:177] copyRemoteCerts
	I1009 19:38:53.630299  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:38:53.630392  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:53.655020  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:53.759307  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:38:53.780546  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:38:53.803197  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:38:49.526581  472674 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:38:49.527749  472674 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-779570 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:38:50.123869  472674 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:38:50.546191  472674 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:38:51.177579  472674 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:38:51.181679  472674 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:38:51.741755  472674 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:38:52.608100  472674 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:38:52.946233  472674 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:38:53.250331  472674 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:38:54.122308  472674 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:38:54.128205  472674 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:38:54.131074  472674 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:38:54.134743  472674 out.go:252]   - Booting up control plane ...
	I1009 19:38:54.134867  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:38:54.135875  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:38:54.140792  472674 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:38:54.158742  472674 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:38:54.158867  472674 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:38:54.167147  472674 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:38:54.167465  472674 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:38:54.167692  472674 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:38:53.831635  475149 provision.go:87] duration metric: took 676.267515ms to configureAuth
	I1009 19:38:53.831724  475149 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:38:53.831966  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:53.832133  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:53.851555  475149 main.go:141] libmachine: Using SSH client type: native
	I1009 19:38:53.851868  475149 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1009 19:38:53.851889  475149 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:38:54.218941  475149 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:38:54.218968  475149 machine.go:96] duration metric: took 4.660565151s to provisionDockerMachine
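Note: the container-runtime step just above writes a minikube-specific drop-in for cri-o and restarts the service so the cluster's service CIDR is treated as an insecure registry. A hedged way to inspect what that step left behind (paths taken from the command in the log; the check itself is not part of the run):

    # hypothetical spot-check of the sysconfig drop-in written above
    cat /etc/sysconfig/crio.minikube
    # expected content, per the tee command in the log:
    #   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl status crio --no-pager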
	I1009 19:38:54.218980  475149 start.go:293] postStartSetup for "no-preload-678119" (driver="docker")
	I1009 19:38:54.218991  475149 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:38:54.219051  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:38:54.219114  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.253136  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.363253  475149 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:38:54.366741  475149 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:38:54.366779  475149 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:38:54.366790  475149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:38:54.366851  475149 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:38:54.366932  475149 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:38:54.367040  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:38:54.374917  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:38:54.394770  475149 start.go:296] duration metric: took 175.774144ms for postStartSetup
	I1009 19:38:54.394858  475149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:38:54.394902  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.419819  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.527507  475149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:38:54.532751  475149 fix.go:56] duration metric: took 5.404825073s for fixHost
	I1009 19:38:54.532777  475149 start.go:83] releasing machines lock for "no-preload-678119", held for 5.404874739s
	I1009 19:38:54.532866  475149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-678119
	I1009 19:38:54.551853  475149 ssh_runner.go:195] Run: cat /version.json
	I1009 19:38:54.551903  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.552171  475149 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:38:54.552236  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:54.587571  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.591973  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.786169  475149 ssh_runner.go:195] Run: systemctl --version
	I1009 19:38:54.797207  475149 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:38:54.844950  475149 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:38:54.850420  475149 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:38:54.850518  475149 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:38:54.859947  475149 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:38:54.859973  475149 start.go:495] detecting cgroup driver to use...
	I1009 19:38:54.860015  475149 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:38:54.860092  475149 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:38:54.877167  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:38:54.891860  475149 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:38:54.891931  475149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:38:54.909386  475149 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:38:54.929952  475149 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:38:55.130327  475149 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:38:55.323209  475149 docker.go:234] disabling docker service ...
	I1009 19:38:55.323343  475149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:38:55.346028  475149 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:38:55.365298  475149 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:38:55.539092  475149 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:38:55.674765  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:38:55.689880  475149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:38:55.712183  475149 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:38:55.712269  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.722072  475149 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:38:55.722162  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.731622  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.740336  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.749099  475149 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:38:55.756988  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.765950  475149 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.774195  475149 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:38:55.782943  475149 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:38:55.790578  475149 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:38:55.797880  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:55.926886  475149 ssh_runner.go:195] Run: sudo systemctl restart crio
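Note: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before the restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "cgroupfs" with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A hedged sketch of confirming the result (not part of the test output):

    # spot-check the keys the sed commands above touched
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",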
	I1009 19:38:56.114635  475149 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:38:56.114763  475149 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:38:56.121096  475149 start.go:563] Will wait 60s for crictl version
	I1009 19:38:56.121207  475149 ssh_runner.go:195] Run: which crictl
	I1009 19:38:56.130479  475149 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:38:56.170766  475149 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
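Note: after restarting cri-o, minikube waits up to 60s for /var/run/crio/crio.sock and then queries the runtime through crictl; the version block above is that reply. An equivalent manual query, assuming the same socket path the crictl.yaml written above points at, would be:

    # query the CRI runtime directly over the socket configured in /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version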
	I1009 19:38:56.170913  475149 ssh_runner.go:195] Run: crio --version
	I1009 19:38:56.216318  475149 ssh_runner.go:195] Run: crio --version
	I1009 19:38:56.266871  475149 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:38:56.269143  475149 cli_runner.go:164] Run: docker network inspect no-preload-678119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:38:56.291545  475149 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:38:56.295967  475149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:56.312988  475149 kubeadm.go:883] updating cluster {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:38:56.313106  475149 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:38:56.313155  475149 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:38:56.364876  475149 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:38:56.364904  475149 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:38:56.364912  475149 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 19:38:56.365004  475149 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-678119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
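Note: the unit snippet above is rendered into the kubelet systemd drop-in written a few lines below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 367 bytes). The empty "ExecStart=" line is the standard systemd drop-in idiom: it clears the ExecStart list inherited from the base unit so the following ExecStart fully replaces it. To see the merged result on the node (a hypothetical check):

    # show the base kubelet unit plus the 10-kubeadm.conf drop-in that overrides ExecStart
    sudo systemctl cat kubelet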
	I1009 19:38:56.365089  475149 ssh_runner.go:195] Run: crio config
	I1009 19:38:56.430355  475149 cni.go:84] Creating CNI manager for ""
	I1009 19:38:56.430382  475149 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:38:56.430401  475149 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:38:56.430439  475149 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-678119 NodeName:no-preload-678119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:38:56.430591  475149 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-678119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
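Note: the YAML above is the kubeadm configuration minikube generates for this profile; it is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2214 bytes) and later consumed by kubeadm. Roughly, applying such a file by hand would look like the sketch below; this is a hedged approximation, since minikube drives kubeadm itself and adds further flags, and the exact binary path and preflight handling here are assumptions:

    # rough manual equivalent of consuming the generated config
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new \
      --ignore-preflight-errors=all   # assumption: minikube relaxes preflight checks on restarts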
	I1009 19:38:56.430676  475149 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:38:56.439902  475149 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:38:56.439985  475149 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:38:56.448388  475149 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:38:56.462540  475149 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:38:56.476519  475149 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1009 19:38:56.491082  475149 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:38:56.495043  475149 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:38:56.505554  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:56.720133  475149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:56.748686  475149 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119 for IP: 192.168.76.2
	I1009 19:38:56.748703  475149 certs.go:195] generating shared ca certs ...
	I1009 19:38:56.748722  475149 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:56.748855  475149 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:38:56.748902  475149 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:38:56.748909  475149 certs.go:257] generating profile certs ...
	I1009 19:38:56.748985  475149 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.key
	I1009 19:38:56.749043  475149 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key.7093ead7
	I1009 19:38:56.749079  475149 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key
	I1009 19:38:56.749184  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:38:56.749218  475149 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:38:56.749226  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:38:56.749249  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:38:56.749270  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:38:56.749290  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:38:56.749330  475149 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:38:56.749922  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:38:56.793839  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:38:56.836661  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:38:56.881185  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:38:56.944481  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:38:57.004468  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:38:57.065273  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:38:57.111675  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:38:57.159516  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:38:57.193974  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:38:57.224679  475149 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:38:57.249240  475149 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:38:57.263896  475149 ssh_runner.go:195] Run: openssl version
	I1009 19:38:57.271087  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:38:57.281107  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.285858  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.285969  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:38:57.329453  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:38:57.338547  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:38:57.347911  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.352745  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.352867  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:38:57.395027  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:38:57.405079  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:38:57.415089  475149 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.422244  475149 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.422359  475149 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:38:57.481668  475149 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
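The `<hash>.0` symlink names above come from OpenSSL's subject-hash lookup scheme: the default verify path in /etc/ssl/certs is searched by hashed filename rather than by the certificate's own name. A minimal sketch of creating such a link by hand (paths taken from the log; the commands are assumed equivalents of what minikube runs over ssh):

	# Compute the subject hash OpenSSL uses to locate CA certificates
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Expose the CA under <hash>.0 so the default verify path can find it
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"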
	I1009 19:38:57.490775  475149 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:38:57.495538  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:38:57.549723  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:38:57.601360  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:38:57.692400  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:38:57.791426  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:38:58.054948  475149 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
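Each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. A hedged sketch of the same check against one of the certs from the log:

	# Exit status 0: still valid 24h from now; 1: expires sooner and needs regeneration
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "apiserver-kubelet-client.crt is valid for at least another 24h"
	else
	    echo "apiserver-kubelet-client.crt expires within 24h"
	fi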
	I1009 19:38:58.224168  475149 kubeadm.go:400] StartCluster: {Name:no-preload-678119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-678119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:38:58.224323  475149 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:38:58.224429  475149 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:38:58.295613  475149 cri.go:89] found id: "776d5d2a3dd24c2952181e3e372a341d651c2047d1ac34637e609003e12200d3"
	I1009 19:38:58.295690  475149 cri.go:89] found id: "dffc78179e9e606448ab0d11318db3cf2d91837e1d2c585b2c4b8f60d128442b"
	I1009 19:38:58.295718  475149 cri.go:89] found id: "3c637faddd2e8f0fb8b314a21d59edf7c705bab7523189d37dae607dd830f8c6"
	I1009 19:38:58.295736  475149 cri.go:89] found id: "e31f57ead2454d1287fc0177a27679d783593dd9c0be034208b774b3d711c45e"
	I1009 19:38:58.295767  475149 cri.go:89] found id: ""
	I1009 19:38:58.295849  475149 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:38:58.315263  475149 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:38:58Z" level=error msg="open /run/runc: no such file or directory"
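The `runc list` failure above is benign: /run/runc only exists once runc has created container state, so its absence means there is nothing paused to resume, and minikube proceeds with the restart. An equivalent manual check on the node (the crictl invocation is copied from the log; the fallback echo is illustrative):

	# List kube-system containers the same way minikube does before deciding on unpause
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# runc keeps per-container state under /run/runc; if the directory is missing, nothing is paused
	sudo runc list -f json || echo "no runc state - nothing to unpause"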
	I1009 19:38:58.315427  475149 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:38:58.352496  475149 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:38:58.352572  475149 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:38:58.352663  475149 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:38:58.364117  475149 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:38:58.364656  475149 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-678119" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:58.364824  475149 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-678119" cluster setting kubeconfig missing "no-preload-678119" context setting]
	I1009 19:38:58.365175  475149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.366813  475149 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:38:58.383869  475149 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 19:38:58.383950  475149 kubeadm.go:601] duration metric: took 31.357412ms to restartPrimaryControlPlane
	I1009 19:38:58.383987  475149 kubeadm.go:402] duration metric: took 159.827914ms to StartCluster
	I1009 19:38:58.384023  475149 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.384115  475149 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:38:58.384839  475149 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:38:58.385108  475149 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:38:58.385553  475149 config.go:182] Loaded profile config "no-preload-678119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:58.385514  475149 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:38:58.385729  475149 addons.go:69] Setting storage-provisioner=true in profile "no-preload-678119"
	I1009 19:38:58.385756  475149 addons.go:238] Setting addon storage-provisioner=true in "no-preload-678119"
	W1009 19:38:58.385789  475149 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:38:58.385831  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.386818  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.387032  475149 addons.go:69] Setting dashboard=true in profile "no-preload-678119"
	I1009 19:38:58.387076  475149 addons.go:238] Setting addon dashboard=true in "no-preload-678119"
	W1009 19:38:58.387121  475149 addons.go:247] addon dashboard should already be in state true
	I1009 19:38:58.387163  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.387373  475149 addons.go:69] Setting default-storageclass=true in profile "no-preload-678119"
	I1009 19:38:58.387399  475149 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-678119"
	I1009 19:38:58.387676  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.387752  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.391628  475149 out.go:179] * Verifying Kubernetes components...
	I1009 19:38:58.396249  475149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:38:58.438333  475149 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:38:58.441306  475149 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:38:58.444158  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:38:58.444184  475149 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:38:58.444265  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.448911  475149 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:38:58.451904  475149 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:58.451928  475149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:38:58.451993  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.463067  475149 addons.go:238] Setting addon default-storageclass=true in "no-preload-678119"
	W1009 19:38:58.463094  475149 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:38:58.463121  475149 host.go:66] Checking if "no-preload-678119" exists ...
	I1009 19:38:58.463526  475149 cli_runner.go:164] Run: docker container inspect no-preload-678119 --format={{.State.Status}}
	I1009 19:38:58.498859  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:58.520867  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:58.523284  475149 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:58.523308  475149 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:38:58.523377  475149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-678119
	I1009 19:38:58.563152  475149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/no-preload-678119/id_rsa Username:docker}
	I1009 19:38:54.330054  472674 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:38:54.330209  472674 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:38:55.832042  472674 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501826405s
	I1009 19:38:55.840261  472674 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:38:55.840363  472674 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 19:38:55.840711  472674 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:38:55.840800  472674 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:38:58.863878  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:38:58.891983  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:38:58.892055  475149 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:38:58.998644  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:38:59.001284  475149 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:38:59.021274  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:38:59.021348  475149 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:38:59.188119  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:38:59.188192  475149 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:38:59.378216  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:38:59.378279  475149 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:38:59.465321  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:38:59.465395  475149 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:38:59.514487  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:38:59.514563  475149 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:38:59.558617  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:38:59.558699  475149 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:38:59.588190  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:38:59.588279  475149 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:38:59.618512  475149 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:38:59.618593  475149 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:38:59.643713  475149 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:39:00.610608  472674 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.769277023s
	I1009 19:39:04.343086  472674 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.501851946s
	I1009 19:39:06.343526  472674 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502694801s
	I1009 19:39:06.369876  472674 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:39:06.387294  472674 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:39:06.405800  472674 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:39:06.406305  472674 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-779570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:39:06.423536  472674 kubeadm.go:318] [bootstrap-token] Using token: lmcsj0.9sm8uir04wanmzmq
	I1009 19:39:06.426543  472674 out.go:252]   - Configuring RBAC rules ...
	I1009 19:39:06.426674  472674 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:39:06.436263  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:39:06.447728  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:39:06.452470  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:39:06.460667  472674 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:39:06.465978  472674 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:39:06.754038  472674 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:39:07.255177  472674 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:39:07.755329  472674 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:39:07.756969  472674 kubeadm.go:318] 
	I1009 19:39:07.757057  472674 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:39:07.757082  472674 kubeadm.go:318] 
	I1009 19:39:07.757168  472674 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:39:07.757178  472674 kubeadm.go:318] 
	I1009 19:39:07.757205  472674 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:39:07.757636  472674 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:39:07.757707  472674 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:39:07.757718  472674 kubeadm.go:318] 
	I1009 19:39:07.757775  472674 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:39:07.757784  472674 kubeadm.go:318] 
	I1009 19:39:07.757834  472674 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:39:07.757841  472674 kubeadm.go:318] 
	I1009 19:39:07.757895  472674 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:39:07.757978  472674 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:39:07.758053  472674 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:39:07.758062  472674 kubeadm.go:318] 
	I1009 19:39:07.758357  472674 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:39:07.758448  472674 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:39:07.758458  472674 kubeadm.go:318] 
	I1009 19:39:07.758721  472674 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token lmcsj0.9sm8uir04wanmzmq \
	I1009 19:39:07.758838  472674 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:39:07.759027  472674 kubeadm.go:318] 	--control-plane 
	I1009 19:39:07.759041  472674 kubeadm.go:318] 
	I1009 19:39:07.759298  472674 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:39:07.759310  472674 kubeadm.go:318] 
	I1009 19:39:07.759586  472674 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token lmcsj0.9sm8uir04wanmzmq \
	I1009 19:39:07.759866  472674 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 19:39:07.775985  472674 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:39:07.776271  472674 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:39:07.776415  472674 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
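The `--discovery-token-ca-cert-hash` printed in the join instructions is the SHA-256 digest of the cluster CA's public key. If a join command ever has to be reconstructed by hand, the hash can be recomputed on the control plane with the standard kubeadm recipe (CA path taken from the log; assumes the default RSA CA key):

	# Recompute the discovery token CA cert hash from the cluster CA certificate
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'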
	I1009 19:39:07.776431  472674 cni.go:84] Creating CNI manager for ""
	I1009 19:39:07.776440  472674 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:39:07.795000  472674 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:39:07.801486  472674 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:39:07.811908  472674 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:39:07.811932  472674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:39:07.836560  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:39:08.499348  472674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:39:08.499480  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:08.499543  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-779570 minikube.k8s.io/updated_at=2025_10_09T19_39_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=embed-certs-779570 minikube.k8s.io/primary=true
	I1009 19:39:08.888724  472674 ops.go:34] apiserver oom_adj: -16
	I1009 19:39:08.888844  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:09.147612  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.283704721s)
	I1009 19:39:09.147678  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.148965085s)
	I1009 19:39:09.147979  475149 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.146620107s)
	I1009 19:39:09.148013  475149 node_ready.go:35] waiting up to 6m0s for node "no-preload-678119" to be "Ready" ...
	I1009 19:39:09.211602  475149 node_ready.go:49] node "no-preload-678119" is "Ready"
	I1009 19:39:09.211632  475149 node_ready.go:38] duration metric: took 63.599366ms for node "no-preload-678119" to be "Ready" ...
	I1009 19:39:09.211646  475149 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:39:09.211706  475149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:39:09.337873  475149 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.6940585s)
	I1009 19:39:09.337951  475149 api_server.go:72] duration metric: took 10.95271973s to wait for apiserver process to appear ...
	I1009 19:39:09.338027  475149 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:39:09.338046  475149 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:39:09.341094  475149 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-678119 addons enable metrics-server
	
	I1009 19:39:09.343577  475149 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 19:39:09.389493  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:09.889054  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:10.389575  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:10.889661  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:11.389592  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:11.889704  472674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:39:12.032460  472674 kubeadm.go:1113] duration metric: took 3.533024214s to wait for elevateKubeSystemPrivileges
	I1009 19:39:12.032488  472674 kubeadm.go:402] duration metric: took 25.214702493s to StartCluster
	I1009 19:39:12.032523  472674 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:39:12.032587  472674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:39:12.034016  472674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:39:12.034541  472674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:39:12.034543  472674 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:39:12.034835  472674 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:39:12.034892  472674 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:39:12.034954  472674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-779570"
	I1009 19:39:12.034971  472674 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-779570"
	I1009 19:39:12.034995  472674 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:39:12.035486  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.036060  472674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-779570"
	I1009 19:39:12.036083  472674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-779570"
	I1009 19:39:12.036402  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.038798  472674 out.go:179] * Verifying Kubernetes components...
	I1009 19:39:12.042674  472674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:39:12.077716  472674 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:39:12.081562  472674 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:39:12.081588  472674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:39:12.081672  472674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:39:12.092287  472674 addons.go:238] Setting addon default-storageclass=true in "embed-certs-779570"
	I1009 19:39:12.092333  472674 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:39:12.092764  472674 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:39:12.131640  472674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:39:12.139869  472674 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:39:12.139893  472674 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:39:12.140052  472674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:39:12.171162  472674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:39:12.453116  472674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:39:12.491524  472674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:39:12.491590  472674 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:39:12.523534  472674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:39:13.550205  472674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.097009245s)
	I1009 19:39:13.550231  472674 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.058671599s)
	I1009 19:39:13.550270  472674 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.058665282s)
	I1009 19:39:13.550282  472674 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
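The sed pipeline completed above splices a `hosts` block into the CoreDNS Corefile so that `host.minikube.internal` resolves to the host gateway address (192.168.85.1 here) from inside the cluster, and also enables the `log` plugin. A quick way to confirm the injected record afterwards (sketch; the jsonpath expression is an assumption, not from the log):

	# Print the patched Corefile and look for the host.minikube.internal hosts block
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'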
	I1009 19:39:13.551726  472674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:39:13.552388  472674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.028816669s)
	I1009 19:39:13.644696  472674 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 19:39:09.346473  475149 addons.go:514] duration metric: took 10.96095221s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 19:39:09.361646  475149 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:39:09.361682  475149 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
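A 500 from /healthz immediately after the control plane restarts is expected while the rbac/bootstrap-roles post-start hook (the single [-] entry above) finishes reconciling the default RBAC objects; the next probe at 19:39:09.838 below returns 200. The same per-check breakdown can be reproduced through the authenticated kubectl client (commands are an illustrative sketch, not taken from the log):

	# Verbose health output with the same [+]/[-] per-check listing
	kubectl get --raw '/healthz?verbose'
	# A single post-start hook can also be queried directly
	kubectl get --raw '/healthz/poststarthook/rbac/bootstrap-roles'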
	I1009 19:39:09.838189  475149 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:39:09.846413  475149 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 19:39:09.847470  475149 api_server.go:141] control plane version: v1.34.1
	I1009 19:39:09.847496  475149 api_server.go:131] duration metric: took 509.460385ms to wait for apiserver health ...
	I1009 19:39:09.847507  475149 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:39:09.851215  475149 system_pods.go:59] 8 kube-system pods found
	I1009 19:39:09.851256  475149 system_pods.go:61] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:09.851265  475149 system_pods.go:61] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:39:09.851302  475149 system_pods.go:61] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:39:09.851312  475149 system_pods.go:61] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:39:09.851324  475149 system_pods.go:61] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:39:09.851329  475149 system_pods.go:61] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:39:09.851336  475149 system_pods.go:61] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:39:09.851344  475149 system_pods.go:61] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Running
	I1009 19:39:09.851351  475149 system_pods.go:74] duration metric: took 3.837394ms to wait for pod list to return data ...
	I1009 19:39:09.851379  475149 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:39:09.854041  475149 default_sa.go:45] found service account: "default"
	I1009 19:39:09.854065  475149 default_sa.go:55] duration metric: took 2.679038ms for default service account to be created ...
	I1009 19:39:09.854076  475149 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:39:09.856968  475149 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:09.857000  475149 system_pods.go:89] "coredns-66bc5c9577-cfmf8" [54b7f29f-4a97-4b36-8523-cada8e102815] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:09.857029  475149 system_pods.go:89] "etcd-no-preload-678119" [bb8fde7c-0813-43ad-b306-15710835ba09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:39:09.857052  475149 system_pods.go:89] "kindnet-rg6kc" [9ae95d69-1114-460b-a01e-1863c278cf3c] Running
	I1009 19:39:09.857059  475149 system_pods.go:89] "kube-apiserver-no-preload-678119" [049b71aa-872d-4001-b9f4-39d29d718a3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:39:09.857066  475149 system_pods.go:89] "kube-controller-manager-no-preload-678119" [7a651126-391c-40fa-8536-43a88fe53045] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:39:09.857076  475149 system_pods.go:89] "kube-proxy-cf6gt" [f0bafa31-2149-4367-9807-708bd7b12e76] Running
	I1009 19:39:09.857083  475149 system_pods.go:89] "kube-scheduler-no-preload-678119" [75790696-4f71-404b-b6c4-b361af303a94] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:39:09.857091  475149 system_pods.go:89] "storage-provisioner" [6a7f4651-d02b-4b66-a8cb-12a333967e17] Running
	I1009 19:39:09.857113  475149 system_pods.go:126] duration metric: took 3.029943ms to wait for k8s-apps to be running ...
	I1009 19:39:09.857129  475149 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:39:09.857202  475149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:39:09.872843  475149 system_svc.go:56] duration metric: took 15.704951ms WaitForService to wait for kubelet
	I1009 19:39:09.872874  475149 kubeadm.go:586] duration metric: took 11.487705051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:39:09.872892  475149 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:39:09.876212  475149 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:39:09.876243  475149 node_conditions.go:123] node cpu capacity is 2
	I1009 19:39:09.876257  475149 node_conditions.go:105] duration metric: took 3.358365ms to run NodePressure ...
	I1009 19:39:09.876269  475149 start.go:241] waiting for startup goroutines ...
	I1009 19:39:09.876277  475149 start.go:246] waiting for cluster config update ...
	I1009 19:39:09.876288  475149 start.go:255] writing updated cluster config ...
	I1009 19:39:09.876587  475149 ssh_runner.go:195] Run: rm -f paused
	I1009 19:39:09.880841  475149 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:09.884250  475149 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:39:11.889457  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	I1009 19:39:13.647509  472674 addons.go:514] duration metric: took 1.612597289s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 19:39:14.059255  472674 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-779570" context rescaled to 1 replicas
	W1009 19:39:13.890731  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:16.390611  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:15.557648  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:18.055169  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:18.891032  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:20.892125  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:23.394200  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:20.062765  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:22.555505  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:25.889428  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:27.893240  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:25.054817  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:27.054867  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:30.390232  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:32.890032  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:29.555075  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:32.055287  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:34.056012  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:34.890646  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:37.389399  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:36.554953  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:38.555125  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:39.396416  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:41.890687  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:41.055629  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:43.554918  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:44.389704  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:46.390038  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	W1009 19:39:45.555610  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:48.054785  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:48.889857  475149 pod_ready.go:104] pod "coredns-66bc5c9577-cfmf8" is not "Ready", error: <nil>
	I1009 19:39:49.392625  475149 pod_ready.go:94] pod "coredns-66bc5c9577-cfmf8" is "Ready"
	I1009 19:39:49.392715  475149 pod_ready.go:86] duration metric: took 39.508438777s for pod "coredns-66bc5c9577-cfmf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.396449  475149 pod_ready.go:83] waiting for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.402278  475149 pod_ready.go:94] pod "etcd-no-preload-678119" is "Ready"
	I1009 19:39:49.402304  475149 pod_ready.go:86] duration metric: took 5.826472ms for pod "etcd-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.405956  475149 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.410555  475149 pod_ready.go:94] pod "kube-apiserver-no-preload-678119" is "Ready"
	I1009 19:39:49.410587  475149 pod_ready.go:86] duration metric: took 4.602417ms for pod "kube-apiserver-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.412948  475149 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.588332  475149 pod_ready.go:94] pod "kube-controller-manager-no-preload-678119" is "Ready"
	I1009 19:39:49.588360  475149 pod_ready.go:86] duration metric: took 175.386297ms for pod "kube-controller-manager-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:49.788632  475149 pod_ready.go:83] waiting for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.189292  475149 pod_ready.go:94] pod "kube-proxy-cf6gt" is "Ready"
	I1009 19:39:50.189321  475149 pod_ready.go:86] duration metric: took 400.662881ms for pod "kube-proxy-cf6gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.388545  475149 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.788692  475149 pod_ready.go:94] pod "kube-scheduler-no-preload-678119" is "Ready"
	I1009 19:39:50.788721  475149 pod_ready.go:86] duration metric: took 400.15168ms for pod "kube-scheduler-no-preload-678119" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:50.788735  475149 pod_ready.go:40] duration metric: took 40.907858692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:50.844656  475149 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:39:50.847818  475149 out.go:179] * Done! kubectl is now configured to use "no-preload-678119" cluster and "default" namespace by default
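The "minor skew: 1" note two lines up flags that the local kubectl (1.33.2) is one minor version behind the cluster (1.34.1), which is within kubectl's supported +/-1 version skew. A quick way to confirm both versions against the freshly configured profile (context name taken from the log):

	# Switch to the new context and compare client and server versions
	kubectl config use-context no-preload-678119
	kubectl version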
	W1009 19:39:50.054842  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:52.055250  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	W1009 19:39:54.555262  472674 node_ready.go:57] node "embed-certs-779570" has "Ready":"False" status (will retry)
	I1009 19:39:55.556813  472674 node_ready.go:49] node "embed-certs-779570" is "Ready"
	I1009 19:39:55.556840  472674 node_ready.go:38] duration metric: took 42.005077378s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:39:55.556854  472674 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:39:55.556916  472674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:39:55.571118  472674 api_server.go:72] duration metric: took 43.536495654s to wait for apiserver process to appear ...
	I1009 19:39:55.571141  472674 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:39:55.571160  472674 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:39:55.581899  472674 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:39:55.583069  472674 api_server.go:141] control plane version: v1.34.1
	I1009 19:39:55.583099  472674 api_server.go:131] duration metric: took 11.951146ms to wait for apiserver health ...
	I1009 19:39:55.583110  472674 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:39:55.586206  472674 system_pods.go:59] 8 kube-system pods found
	I1009 19:39:55.586242  472674 system_pods.go:61] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.586250  472674 system_pods.go:61] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.586257  472674 system_pods.go:61] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.586262  472674 system_pods.go:61] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.586267  472674 system_pods.go:61] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.586273  472674 system_pods.go:61] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.586277  472674 system_pods.go:61] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.586284  472674 system_pods.go:61] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.586299  472674 system_pods.go:74] duration metric: took 3.182256ms to wait for pod list to return data ...
	I1009 19:39:55.586309  472674 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:39:55.589128  472674 default_sa.go:45] found service account: "default"
	I1009 19:39:55.589156  472674 default_sa.go:55] duration metric: took 2.840943ms for default service account to be created ...
	I1009 19:39:55.589166  472674 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:39:55.593610  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:55.593642  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.593648  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.593655  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.593659  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.593664  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.593668  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.593673  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.593679  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.593704  472674 retry.go:31] will retry after 245.493217ms: missing components: kube-dns
	I1009 19:39:55.844658  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:55.844692  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:55.844699  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:55.844722  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:55.844727  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:55.844732  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:55.844736  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:55.844740  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:55.844746  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:55.844761  472674 retry.go:31] will retry after 270.704249ms: missing components: kube-dns
	I1009 19:39:56.120386  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:56.120421  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:56.120428  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:56.120434  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:56.120439  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:56.120445  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:56.120449  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:56.120453  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:56.120459  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:56.120497  472674 retry.go:31] will retry after 482.359976ms: missing components: kube-dns
	I1009 19:39:56.606422  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:56.606457  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:39:56.606465  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:56.606471  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:56.606475  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:56.606480  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:56.606484  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:56.606489  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:56.606495  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:39:56.606514  472674 retry.go:31] will retry after 538.519972ms: missing components: kube-dns
	I1009 19:39:57.150098  472674 system_pods.go:86] 8 kube-system pods found
	I1009 19:39:57.150205  472674 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Running
	I1009 19:39:57.150219  472674 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running
	I1009 19:39:57.150225  472674 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:39:57.150232  472674 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running
	I1009 19:39:57.150242  472674 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running
	I1009 19:39:57.150247  472674 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:39:57.150251  472674 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running
	I1009 19:39:57.150255  472674 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Running
	I1009 19:39:57.150266  472674 system_pods.go:126] duration metric: took 1.56109474s to wait for k8s-apps to be running ...
	I1009 19:39:57.150279  472674 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:39:57.150332  472674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:39:57.168317  472674 system_svc.go:56] duration metric: took 18.028148ms WaitForService to wait for kubelet
	I1009 19:39:57.168348  472674 kubeadm.go:586] duration metric: took 45.133730211s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:39:57.168367  472674 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:39:57.171899  472674 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:39:57.171942  472674 node_conditions.go:123] node cpu capacity is 2
	I1009 19:39:57.171957  472674 node_conditions.go:105] duration metric: took 3.584132ms to run NodePressure ...
	I1009 19:39:57.171969  472674 start.go:241] waiting for startup goroutines ...
	I1009 19:39:57.171977  472674 start.go:246] waiting for cluster config update ...
	I1009 19:39:57.171990  472674 start.go:255] writing updated cluster config ...
	I1009 19:39:57.172347  472674 ssh_runner.go:195] Run: rm -f paused
	I1009 19:39:57.177547  472674 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:57.182493  472674 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.196579  472674 pod_ready.go:94] pod "coredns-66bc5c9577-4c9xb" is "Ready"
	I1009 19:39:57.196608  472674 pod_ready.go:86] duration metric: took 14.085128ms for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.199978  472674 pod_ready.go:83] waiting for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.208562  472674 pod_ready.go:94] pod "etcd-embed-certs-779570" is "Ready"
	I1009 19:39:57.208600  472674 pod_ready.go:86] duration metric: took 8.594816ms for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.211279  472674 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.217066  472674 pod_ready.go:94] pod "kube-apiserver-embed-certs-779570" is "Ready"
	I1009 19:39:57.217103  472674 pod_ready.go:86] duration metric: took 5.798296ms for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.219580  472674 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.582171  472674 pod_ready.go:94] pod "kube-controller-manager-embed-certs-779570" is "Ready"
	I1009 19:39:57.582252  472674 pod_ready.go:86] duration metric: took 362.649708ms for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:57.782320  472674 pod_ready.go:83] waiting for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.181328  472674 pod_ready.go:94] pod "kube-proxy-sp4bk" is "Ready"
	I1009 19:39:58.181359  472674 pod_ready.go:86] duration metric: took 399.01215ms for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.381832  472674 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.781804  472674 pod_ready.go:94] pod "kube-scheduler-embed-certs-779570" is "Ready"
	I1009 19:39:58.781835  472674 pod_ready.go:86] duration metric: took 399.975272ms for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:39:58.781847  472674 pod_ready.go:40] duration metric: took 1.604264696s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:39:58.836096  472674 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:39:58.841462  472674 out.go:179] * Done! kubectl is now configured to use "embed-certs-779570" cluster and "default" namespace by default
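The startup log above repeatedly checks https://192.168.85.2:8443/healthz until it returns 200, then retries the kube-system pod list until kube-dns is running. The following is a minimal sketch of that healthz polling step, written for illustration only (it is not minikube's own helper); the endpoint URL, the 4-minute budget, and the decision to skip TLS verification are assumptions.

// healthzpoll: a hypothetical re-creation of the "waiting for apiserver healthz" loop.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline expires,
// mirroring the retry behaviour recorded in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test profile's apiserver uses a self-signed certificate, so this
		// sketch skips verification; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // back off before retrying, as the log does
	}
	return errors.New("apiserver healthz did not become ready in time")
}

func main() {
	// 192.168.85.2:8443 is the control-plane endpoint reported in the log above.
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}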
	
	
	==> CRI-O <==
	Oct 09 19:39:55 embed-certs-779570 crio[840]: time="2025-10-09T19:39:55.925699307Z" level=info msg="Created container 468622689fd5a9c83688f08410874c9a02fe664dc67d0e7991c3c58e923e5699: kube-system/coredns-66bc5c9577-4c9xb/coredns" id=6eba69a7-d5f7-454e-8d88-3fee86d8dbba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:39:55 embed-certs-779570 crio[840]: time="2025-10-09T19:39:55.926289542Z" level=info msg="Starting container: 468622689fd5a9c83688f08410874c9a02fe664dc67d0e7991c3c58e923e5699" id=252ed261-de26-4612-b82c-1e8d4d839d66 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:39:55 embed-certs-779570 crio[840]: time="2025-10-09T19:39:55.935185005Z" level=info msg="Started container" PID=1731 containerID=468622689fd5a9c83688f08410874c9a02fe664dc67d0e7991c3c58e923e5699 description=kube-system/coredns-66bc5c9577-4c9xb/coredns id=252ed261-de26-4612-b82c-1e8d4d839d66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39e43e63d67c7138bb127e6a78be1842f460906228279277ec365076027cc68d
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.357740507Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6f21921b-4595-4930-88bd-d252554c0d35 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.357826588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.363514523Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:787c48ff21815e54d1dfac53f3107ab418a5e9519862b3fe235905403f1e1b56 UID:da851351-fb55-4c75-887c-1b549c0858fd NetNS:/var/run/netns/1409c1c0-e279-47c6-8a0a-1e5fa61f22f2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d338}] Aliases:map[]}"
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.363550978Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.375558536Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:787c48ff21815e54d1dfac53f3107ab418a5e9519862b3fe235905403f1e1b56 UID:da851351-fb55-4c75-887c-1b549c0858fd NetNS:/var/run/netns/1409c1c0-e279-47c6-8a0a-1e5fa61f22f2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d338}] Aliases:map[]}"
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.375707067Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.380904715Z" level=info msg="Ran pod sandbox 787c48ff21815e54d1dfac53f3107ab418a5e9519862b3fe235905403f1e1b56 with infra container: default/busybox/POD" id=6f21921b-4595-4930-88bd-d252554c0d35 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.382119867Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2a7e980f-ddbd-46d4-9f2a-ac869b5dba50 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.382352486Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2a7e980f-ddbd-46d4-9f2a-ac869b5dba50 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.382393914Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2a7e980f-ddbd-46d4-9f2a-ac869b5dba50 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.386481461Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e0a8dd81-80ed-41d3-b9a8-a60f30f35316 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:39:59 embed-certs-779570 crio[840]: time="2025-10-09T19:39:59.389618827Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.50330236Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=e0a8dd81-80ed-41d3-b9a8-a60f30f35316 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.504021096Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dad5402f-a329-496c-bd66-98389cfc8032 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.506002272Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b55648e5-d617-4913-aa46-396b8fea962b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.512703298Z" level=info msg="Creating container: default/busybox/busybox" id=b3853297-5610-4733-acc6-de631ec22287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.513665754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.518587928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.519110355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.534968796Z" level=info msg="Created container ddb656ee0c4339ff43bc86df80fe9c3a45c6a42f94f03d6aea98d80d7815623e: default/busybox/busybox" id=b3853297-5610-4733-acc6-de631ec22287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.54168053Z" level=info msg="Starting container: ddb656ee0c4339ff43bc86df80fe9c3a45c6a42f94f03d6aea98d80d7815623e" id=8e58a1bd-adbf-4429-87f7-81bab6ecc162 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:40:01 embed-certs-779570 crio[840]: time="2025-10-09T19:40:01.543879152Z" level=info msg="Started container" PID=1787 containerID=ddb656ee0c4339ff43bc86df80fe9c3a45c6a42f94f03d6aea98d80d7815623e description=default/busybox/busybox id=8e58a1bd-adbf-4429-87f7-81bab6ecc162 name=/runtime.v1.RuntimeService/StartContainer sandboxID=787c48ff21815e54d1dfac53f3107ab418a5e9519862b3fe235905403f1e1b56
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ddb656ee0c433       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   787c48ff21815       busybox                                      default
	468622689fd5a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   39e43e63d67c7       coredns-66bc5c9577-4c9xb                     kube-system
	75e9c7e52ed70       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   8aa717ce1db2c       storage-provisioner                          kube-system
	c788e48e9b25d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   ca4efc37f8b9f       kindnet-lgfbl                                kube-system
	52c6e6a78b89e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   67f1253cec93a       kube-proxy-sp4bk                             kube-system
	ac4f8792faa84       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   3253e27344946       etcd-embed-certs-779570                      kube-system
	a91c9b826054a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   f39c72af4c31e       kube-apiserver-embed-certs-779570            kube-system
	0fde6b5dac8d2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   191a7a67ef0ca       kube-controller-manager-embed-certs-779570   kube-system
	d2838d4e8c3b5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   73ee13579a23f       kube-scheduler-embed-certs-779570            kube-system
	
	
	==> coredns [468622689fd5a9c83688f08410874c9a02fe664dc67d0e7991c3c58e923e5699] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39212 - 6213 "HINFO IN 4966270754790898547.4226506727015673263. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013822134s
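The CoreDNS log above shows the resolver serving on :53 inside the cluster, and the kube-apiserver log further down allocates 10.96.0.10 to the kube-dns service. A hedged sketch of probing that resolver directly from Go follows; it only works from a host with a route into the service network, and the queried name is an assumption chosen for illustration.

// dnscheck: a hypothetical probe of the in-cluster DNS service at 10.96.0.10.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	resolver := &net.Resolver{
		PreferGo: true,
		// Route every lookup to the cluster DNS service instead of /etc/resolv.conf.
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Resolve the kubernetes service name the way an in-cluster pod would.
	addrs, err := resolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes service resolves to:", addrs)
}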
	
	
	==> describe nodes <==
	Name:               embed-certs-779570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-779570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=embed-certs-779570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_39_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-779570
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:40:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:40:08 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:40:08 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:40:08 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:40:08 +0000   Thu, 09 Oct 2025 19:39:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-779570
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c22bd04b163049cc904026fce6842266
	  System UUID:                1e5d6a7e-cdd6-479d-b40f-96791041c4dd
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-4c9xb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-779570                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-lgfbl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-779570             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-779570    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-sp4bk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-779570             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node embed-certs-779570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x8 over 74s)  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node embed-certs-779570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node embed-certs-779570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node embed-certs-779570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-779570 event: Registered Node embed-certs-779570 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-779570 status is now: NodeReady
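The conditions table above (MemoryPressure, DiskPressure, PIDPressure, Ready) is what the startup log's node_conditions.go step verifies before declaring the node healthy. Below is a sketch of reading the same conditions with client-go; it is not minikube's code, and the kubeconfig path is a hypothetical placeholder.

// nodeconditions: a hedged re-creation of the NodePressure-style check.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "embed-certs-779570", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// A node passes this check when Ready is True and the three pressure
	// conditions are False, matching the table above.
	for _, cond := range node.Status.Conditions {
		switch cond.Type {
		case corev1.NodeReady, corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			fmt.Printf("%s=%s (%s)\n", cond.Type, cond.Status, cond.Reason)
		}
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
}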
	
	
	==> dmesg <==
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ac4f8792faa845df66430574507ca624bb111e21cf4cba1452675a6ab32e14c1] <==
	{"level":"warn","ts":"2025-10-09T19:39:01.570509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.607617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.626917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.654319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.686921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.723077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.734065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.771560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.790578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.823146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.850257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.886838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.904663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.960129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:01.969014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.007555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.024927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.053455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.118824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.127898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.153983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.184242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.213929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.271720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:39:02.385649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36012","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:40:09 up  2:22,  0 user,  load average: 2.92, 2.85, 2.30
	Linux embed-certs-779570 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c788e48e9b25dd8ee71eb6eca270adc03484df8b674032c5dd3ec61e71bd9746] <==
	I1009 19:39:14.828911       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:39:14.914465       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:39:14.914638       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:39:14.914657       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:39:14.914672       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:39:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:39:15.115672       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:39:15.115773       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:39:15.115812       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:39:15.115989       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:39:45.116053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:39:45.119901       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:39:45.120003       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:39:45.119902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 19:39:46.316716       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:39:46.316747       1 metrics.go:72] Registering metrics
	I1009 19:39:46.316800       1 controller.go:711] "Syncing nftables rules"
	I1009 19:39:55.115500       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:39:55.115636       1 main.go:301] handling current node
	I1009 19:40:05.115487       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:40:05.115603       1 main.go:301] handling current node
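The kindnet log above shows the usual informer lifecycle: reflector "Failed to watch" errors while the apiserver is briefly unreachable, followed by "Caches are synced" once the initial list succeeds. The sketch below illustrates that wait-for-cache-sync pattern with a client-go shared informer; it is an illustration only (kindnet builds its client in-cluster), and the kubeconfig path is an assumption.

// informersync: a hypothetical demonstration of "Waiting for caches to sync".
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)

	factory.Start(stopCh)
	// Until the initial List succeeds, the reflector logs "Failed to watch"
	// errors like the ones above and keeps retrying with backoff.
	if !cache.WaitForCacheSync(stopCh, nodeInformer.HasSynced) {
		fmt.Println("caches never synced")
		return
	}
	fmt.Println("caches are synced; informer handlers can now run")
}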
	
	
	==> kube-apiserver [a91c9b826054a892efb2e98660703ba176baa7503d708b29f8fab2de5894ced9] <==
	I1009 19:39:04.221181       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:39:04.231016       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 19:39:04.231139       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:39:04.333265       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:39:04.333682       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 19:39:04.416593       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:39:04.424727       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:39:04.489828       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 19:39:04.518920       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 19:39:04.519002       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:39:05.824353       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:39:05.898513       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:39:06.069606       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:39:06.079007       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1009 19:39:06.080266       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:39:06.087514       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:39:06.582162       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:39:07.225426       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:39:07.253808       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:39:07.281274       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 19:39:11.748068       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:39:12.464528       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:39:12.481305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:39:12.728568       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1009 19:40:07.271298       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:43152: use of closed network connection
	
	
	==> kube-controller-manager [0fde6b5dac8d27e00c6d3c172a1f34fac03d20e624324d4d1e21343a1aaf5b18] <==
	I1009 19:39:11.686534       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 19:39:11.686540       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 19:39:11.694880       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:39:11.698316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 19:39:11.698762       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-779570" podCIDRs=["10.244.0.0/24"]
	I1009 19:39:11.701773       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 19:39:11.702984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:39:11.703088       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:39:11.703139       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:39:11.703204       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:39:11.704333       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:39:11.704403       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:39:11.711707       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:39:11.718298       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:39:11.718662       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 19:39:11.718722       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:39:11.718801       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-779570"
	I1009 19:39:11.718842       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 19:39:11.719514       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:39:11.726385       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:39:11.727642       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:39:11.727663       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:39:11.727671       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:39:11.734265       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 19:39:56.727303       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [52c6e6a78b89ed27cc451012f9e9bfa1b4e2ea9768744ffb2c094a31380dc0be] <==
	I1009 19:39:14.887149       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:39:14.988038       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:39:15.088807       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:39:15.088908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:39:15.089015       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:39:15.222300       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:39:15.222483       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:39:15.240224       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:39:15.241127       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:39:15.241147       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:39:15.243191       1 config.go:200] "Starting service config controller"
	I1009 19:39:15.243279       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:39:15.243349       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:39:15.243394       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:39:15.243436       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:39:15.243490       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:39:15.249377       1 config.go:309] "Starting node config controller"
	I1009 19:39:15.249473       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:39:15.249538       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:39:15.370893       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:39:15.371010       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:39:15.371160       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d2838d4e8c3b527cd7e3184226d4644fba1be6edca20d95566dfa08f6e22cdb8] <==
	E1009 19:39:04.394337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:39:04.394381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:39:04.394429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:39:04.394472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:39:04.394512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:39:04.394561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 19:39:04.394608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:39:04.394658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:39:04.394700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:39:04.394745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 19:39:04.394788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:39:04.394830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 19:39:04.394913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:39:04.394949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:39:04.394993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:39:05.266422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:39:05.317878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:39:05.333554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:39:05.352293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:39:05.368054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:39:05.406927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:39:05.417547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:39:05.439754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 19:39:05.789488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1009 19:39:07.632472       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
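The scheduler log above records "forbidden" list errors during the first seconds of bootstrap, before its RBAC bindings have propagated, and then syncs its caches once access is granted. One hedged way to turn such an error into a yes/no answer is a SelfSubjectAccessReview, sketched below; the kubeconfig path is a hypothetical placeholder and the checked resource is just one of those denied above.

// rbaccheck: ask the apiserver whether the current identity may list nodes.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "nodes", // one of the resources the scheduler was denied above
			},
		},
	}

	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}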
	
	
	==> kubelet <==
	Oct 09 19:39:12 embed-certs-779570 kubelet[1312]: I1009 19:39:12.883134    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48274b49-cb31-48c3-96c8-a187d4e6000b-lib-modules\") pod \"kube-proxy-sp4bk\" (UID: \"48274b49-cb31-48c3-96c8-a187d4e6000b\") " pod="kube-system/kube-proxy-sp4bk"
	Oct 09 19:39:12 embed-certs-779570 kubelet[1312]: I1009 19:39:12.883244    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/45264249-abcf-4cfc-b842-d97424fc53be-cni-cfg\") pod \"kindnet-lgfbl\" (UID: \"45264249-abcf-4cfc-b842-d97424fc53be\") " pod="kube-system/kindnet-lgfbl"
	Oct 09 19:39:12 embed-certs-779570 kubelet[1312]: I1009 19:39:12.883383    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45264249-abcf-4cfc-b842-d97424fc53be-lib-modules\") pod \"kindnet-lgfbl\" (UID: \"45264249-abcf-4cfc-b842-d97424fc53be\") " pod="kube-system/kindnet-lgfbl"
	Oct 09 19:39:12 embed-certs-779570 kubelet[1312]: I1009 19:39:12.883613    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48274b49-cb31-48c3-96c8-a187d4e6000b-xtables-lock\") pod \"kube-proxy-sp4bk\" (UID: \"48274b49-cb31-48c3-96c8-a187d4e6000b\") " pod="kube-system/kube-proxy-sp4bk"
	Oct 09 19:39:13 embed-certs-779570 kubelet[1312]: E1009 19:39:13.985366    1312 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:39:13 embed-certs-779570 kubelet[1312]: E1009 19:39:13.985488    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48274b49-cb31-48c3-96c8-a187d4e6000b-kube-proxy podName:48274b49-cb31-48c3-96c8-a187d4e6000b nodeName:}" failed. No retries permitted until 2025-10-09 19:39:14.485461881 +0000 UTC m=+7.326734538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/48274b49-cb31-48c3-96c8-a187d4e6000b-kube-proxy") pod "kube-proxy-sp4bk" (UID: "48274b49-cb31-48c3-96c8-a187d4e6000b") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:39:14 embed-certs-779570 kubelet[1312]: E1009 19:39:14.067717    1312 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:39:14 embed-certs-779570 kubelet[1312]: E1009 19:39:14.067769    1312 projected.go:196] Error preparing data for projected volume kube-api-access-vxlj5 for pod kube-system/kindnet-lgfbl: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:39:14 embed-certs-779570 kubelet[1312]: E1009 19:39:14.067869    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45264249-abcf-4cfc-b842-d97424fc53be-kube-api-access-vxlj5 podName:45264249-abcf-4cfc-b842-d97424fc53be nodeName:}" failed. No retries permitted until 2025-10-09 19:39:14.567846423 +0000 UTC m=+7.409119080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vxlj5" (UniqueName: "kubernetes.io/projected/45264249-abcf-4cfc-b842-d97424fc53be-kube-api-access-vxlj5") pod "kindnet-lgfbl" (UID: "45264249-abcf-4cfc-b842-d97424fc53be") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:39:14 embed-certs-779570 kubelet[1312]: E1009 19:39:14.091556    1312 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:39:14 embed-certs-779570 kubelet[1312]: E1009 19:39:14.091606    1312 projected.go:196] Error preparing data for projected volume kube-api-access-7snmc for pod kube-system/kube-proxy-sp4bk: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:39:14 embed-certs-779570 kubelet[1312]: E1009 19:39:14.091685    1312 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48274b49-cb31-48c3-96c8-a187d4e6000b-kube-api-access-7snmc podName:48274b49-cb31-48c3-96c8-a187d4e6000b nodeName:}" failed. No retries permitted until 2025-10-09 19:39:14.591663886 +0000 UTC m=+7.432936543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7snmc" (UniqueName: "kubernetes.io/projected/48274b49-cb31-48c3-96c8-a187d4e6000b-kube-api-access-7snmc") pod "kube-proxy-sp4bk" (UID: "48274b49-cb31-48c3-96c8-a187d4e6000b") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 19:39:14 embed-certs-779570 kubelet[1312]: I1009 19:39:14.597355    1312 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:39:14 embed-certs-779570 kubelet[1312]: W1009 19:39:14.690409    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/crio-ca4efc37f8b9f0c5322d5acc30be2fdc57ecb40c79b1b752edbfda24a3613552 WatchSource:0}: Error finding container ca4efc37f8b9f0c5322d5acc30be2fdc57ecb40c79b1b752edbfda24a3613552: Status 404 returned error can't find the container with id ca4efc37f8b9f0c5322d5acc30be2fdc57ecb40c79b1b752edbfda24a3613552
	Oct 09 19:39:15 embed-certs-779570 kubelet[1312]: I1009 19:39:15.701774    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sp4bk" podStartSLOduration=3.701757094 podStartE2EDuration="3.701757094s" podCreationTimestamp="2025-10-09 19:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:39:15.698618235 +0000 UTC m=+8.539890909" watchObservedRunningTime="2025-10-09 19:39:15.701757094 +0000 UTC m=+8.543029751"
	Oct 09 19:39:15 embed-certs-779570 kubelet[1312]: I1009 19:39:15.777618    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lgfbl" podStartSLOduration=3.777601475 podStartE2EDuration="3.777601475s" podCreationTimestamp="2025-10-09 19:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:39:15.745207504 +0000 UTC m=+8.586480169" watchObservedRunningTime="2025-10-09 19:39:15.777601475 +0000 UTC m=+8.618874132"
	Oct 09 19:39:55 embed-certs-779570 kubelet[1312]: I1009 19:39:55.490215    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 09 19:39:55 embed-certs-779570 kubelet[1312]: I1009 19:39:55.593689    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhvz\" (UniqueName: \"kubernetes.io/projected/e2529ef6-950b-4a93-8a58-05ced011aec9-kube-api-access-lqhvz\") pod \"coredns-66bc5c9577-4c9xb\" (UID: \"e2529ef6-950b-4a93-8a58-05ced011aec9\") " pod="kube-system/coredns-66bc5c9577-4c9xb"
	Oct 09 19:39:55 embed-certs-779570 kubelet[1312]: I1009 19:39:55.593903    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2529ef6-950b-4a93-8a58-05ced011aec9-config-volume\") pod \"coredns-66bc5c9577-4c9xb\" (UID: \"e2529ef6-950b-4a93-8a58-05ced011aec9\") " pod="kube-system/coredns-66bc5c9577-4c9xb"
	Oct 09 19:39:55 embed-certs-779570 kubelet[1312]: I1009 19:39:55.694370    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2cdc2193-2900-4e63-a482-40739fe08704-tmp\") pod \"storage-provisioner\" (UID: \"2cdc2193-2900-4e63-a482-40739fe08704\") " pod="kube-system/storage-provisioner"
	Oct 09 19:39:55 embed-certs-779570 kubelet[1312]: I1009 19:39:55.694632    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnkbw\" (UniqueName: \"kubernetes.io/projected/2cdc2193-2900-4e63-a482-40739fe08704-kube-api-access-lnkbw\") pod \"storage-provisioner\" (UID: \"2cdc2193-2900-4e63-a482-40739fe08704\") " pod="kube-system/storage-provisioner"
	Oct 09 19:39:56 embed-certs-779570 kubelet[1312]: I1009 19:39:56.795952    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.795931187 podStartE2EDuration="43.795931187s" podCreationTimestamp="2025-10-09 19:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:39:56.780979046 +0000 UTC m=+49.622251703" watchObservedRunningTime="2025-10-09 19:39:56.795931187 +0000 UTC m=+49.637203844"
	Oct 09 19:39:59 embed-certs-779570 kubelet[1312]: I1009 19:39:59.047265    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4c9xb" podStartSLOduration=47.047248572 podStartE2EDuration="47.047248572s" podCreationTimestamp="2025-10-09 19:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:39:56.796326597 +0000 UTC m=+49.637599254" watchObservedRunningTime="2025-10-09 19:39:59.047248572 +0000 UTC m=+51.888521237"
	Oct 09 19:39:59 embed-certs-779570 kubelet[1312]: I1009 19:39:59.218293    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjmhm\" (UniqueName: \"kubernetes.io/projected/da851351-fb55-4c75-887c-1b549c0858fd-kube-api-access-cjmhm\") pod \"busybox\" (UID: \"da851351-fb55-4c75-887c-1b549c0858fd\") " pod="default/busybox"
	Oct 09 19:39:59 embed-certs-779570 kubelet[1312]: W1009 19:39:59.378955    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/crio-787c48ff21815e54d1dfac53f3107ab418a5e9519862b3fe235905403f1e1b56 WatchSource:0}: Error finding container 787c48ff21815e54d1dfac53f3107ab418a5e9519862b3fe235905403f1e1b56: Status 404 returned error can't find the container with id 787c48ff21815e54d1dfac53f3107ab418a5e9519862b3fe235905403f1e1b56
	
	
	==> storage-provisioner [75e9c7e52ed70ff2a7ef556c0823fc71591e49fd1f4887536ca1c2cb48c72d86] <==
	I1009 19:39:55.923718       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:39:55.953576       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:39:55.953714       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 19:39:55.956663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:55.972558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:39:55.985932       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:39:55.991662       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-779570_a709126c-68a2-4f8c-969c-28e0666aea3d!
	I1009 19:39:55.992748       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba6ad425-7ecb-45c9-9bf0-c63c463c7246", APIVersion:"v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-779570_a709126c-68a2-4f8c-969c-28e0666aea3d became leader
	W1009 19:39:55.998986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:56.003979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:39:56.092526       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-779570_a709126c-68a2-4f8c-969c-28e0666aea3d!
	W1009 19:39:58.007759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:39:58.013231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:00.052723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:00.078213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:02.087041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:02.091924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:04.095565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:04.100230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:06.103862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:06.109706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:08.112954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:40:08.120404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
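The kubelet errors in the log dump above ("failed to sync configmap cache: timed out waiting for the condition") are transient start-up noise: each failed mount is retried 500ms later, and both kube-proxy-sp4bk and kindnet-lgfbl are reported running a few seconds afterwards (pod_startup_latency_tracker at 19:39:15). A hedged way to confirm this after the fact (these commands are not part of the captured run):

    kubectl --context embed-certs-779570 -n kube-system get pods kube-proxy-sp4bk kindnet-lgfbl -o wide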
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-779570 -n embed-certs-779570
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-779570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.43s)
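The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above come from its leader-election lock, which is still an Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event). The API server's warning suggests EndpointSlice as the general replacement for the Endpoints API; for a leader-election lock specifically, the usual migration target is a coordination.k8s.io/v1 Lease. Hedged commands to inspect both objects (not part of the captured run):

    kubectl --context embed-certs-779570 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    kubectl --context embed-certs-779570 -n kube-system get leases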

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-779570 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-779570 --alsologtostderr -v=1: exit status 80 (1.679240297s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-779570 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:41:38.035214  484904 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:41:38.035451  484904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:41:38.035481  484904 out.go:374] Setting ErrFile to fd 2...
	I1009 19:41:38.035564  484904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:41:38.035877  484904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:41:38.036273  484904 out.go:368] Setting JSON to false
	I1009 19:41:38.036334  484904 mustload.go:65] Loading cluster: embed-certs-779570
	I1009 19:41:38.036781  484904 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:41:38.037376  484904 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:41:38.057558  484904 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:41:38.057887  484904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:41:38.120875  484904 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:41:38.111071658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:41:38.121569  484904 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-779570 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 19:41:38.124920  484904 out.go:179] * Pausing node embed-certs-779570 ... 
	I1009 19:41:38.127847  484904 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:41:38.128246  484904 ssh_runner.go:195] Run: systemctl --version
	I1009 19:41:38.128304  484904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:41:38.146986  484904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:41:38.252871  484904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:41:38.265446  484904 pause.go:52] kubelet running: true
	I1009 19:41:38.265514  484904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:41:38.517253  484904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:41:38.517345  484904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:41:38.589869  484904 cri.go:89] found id: "fc0c3ef4a639ccbc3d6ed8d36520f8322003481b7ac29e100ae0450982064103"
	I1009 19:41:38.589888  484904 cri.go:89] found id: "e12f4daf9424db3be061381fcf8b34688e94a433a3c0e1bca9a0641e37f02174"
	I1009 19:41:38.589894  484904 cri.go:89] found id: "54d87d820dc1938f8d34bfd342416dac5f2adf821653498270e0a72d6b35d5f4"
	I1009 19:41:38.589898  484904 cri.go:89] found id: "a30da166d076a40602ea5309119d43b6346a615ba7729aabad0cf470c756b482"
	I1009 19:41:38.589901  484904 cri.go:89] found id: "8eab2000543c1d27f928d4c888318e070735d4233b4da58e8cfc2fd633fb79f3"
	I1009 19:41:38.589905  484904 cri.go:89] found id: "e31682d081500642ab41d785eae95bb338cc60ecad8ebf0b9e2c526d9258fe13"
	I1009 19:41:38.589908  484904 cri.go:89] found id: "17c33253b376c0b387ae7ebe4e58be315318a5622f757e30efdd1a57e6553e7d"
	I1009 19:41:38.589911  484904 cri.go:89] found id: "4be19799344c62649bd6f8d67821e8145a7756b618a2eef2982c64fd4b30a0c8"
	I1009 19:41:38.589915  484904 cri.go:89] found id: "bd786fb30861863193a83b2938180673280745718306b229493eaee4d7f1c6df"
	I1009 19:41:38.589921  484904 cri.go:89] found id: "446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f"
	I1009 19:41:38.589924  484904 cri.go:89] found id: "be2d4760bae314d76c51cc9122f7ba323e293e47592bbcb36827264e3fac02c6"
	I1009 19:41:38.589929  484904 cri.go:89] found id: ""
	I1009 19:41:38.589980  484904 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:41:38.606784  484904 retry.go:31] will retry after 194.398654ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:41:38Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:41:38.802281  484904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:41:38.816775  484904 pause.go:52] kubelet running: false
	I1009 19:41:38.816907  484904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:41:39.000473  484904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:41:39.000662  484904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:41:39.078245  484904 cri.go:89] found id: "fc0c3ef4a639ccbc3d6ed8d36520f8322003481b7ac29e100ae0450982064103"
	I1009 19:41:39.078315  484904 cri.go:89] found id: "e12f4daf9424db3be061381fcf8b34688e94a433a3c0e1bca9a0641e37f02174"
	I1009 19:41:39.078336  484904 cri.go:89] found id: "54d87d820dc1938f8d34bfd342416dac5f2adf821653498270e0a72d6b35d5f4"
	I1009 19:41:39.078365  484904 cri.go:89] found id: "a30da166d076a40602ea5309119d43b6346a615ba7729aabad0cf470c756b482"
	I1009 19:41:39.078390  484904 cri.go:89] found id: "8eab2000543c1d27f928d4c888318e070735d4233b4da58e8cfc2fd633fb79f3"
	I1009 19:41:39.078404  484904 cri.go:89] found id: "e31682d081500642ab41d785eae95bb338cc60ecad8ebf0b9e2c526d9258fe13"
	I1009 19:41:39.078408  484904 cri.go:89] found id: "17c33253b376c0b387ae7ebe4e58be315318a5622f757e30efdd1a57e6553e7d"
	I1009 19:41:39.078411  484904 cri.go:89] found id: "4be19799344c62649bd6f8d67821e8145a7756b618a2eef2982c64fd4b30a0c8"
	I1009 19:41:39.078421  484904 cri.go:89] found id: "bd786fb30861863193a83b2938180673280745718306b229493eaee4d7f1c6df"
	I1009 19:41:39.078427  484904 cri.go:89] found id: "446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f"
	I1009 19:41:39.078430  484904 cri.go:89] found id: "be2d4760bae314d76c51cc9122f7ba323e293e47592bbcb36827264e3fac02c6"
	I1009 19:41:39.078433  484904 cri.go:89] found id: ""
	I1009 19:41:39.078484  484904 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:41:39.089753  484904 retry.go:31] will retry after 219.354485ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:41:39Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:41:39.310107  484904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:41:39.323280  484904 pause.go:52] kubelet running: false
	I1009 19:41:39.323345  484904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:41:39.518454  484904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:41:39.518591  484904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:41:39.613541  484904 cri.go:89] found id: "fc0c3ef4a639ccbc3d6ed8d36520f8322003481b7ac29e100ae0450982064103"
	I1009 19:41:39.613562  484904 cri.go:89] found id: "e12f4daf9424db3be061381fcf8b34688e94a433a3c0e1bca9a0641e37f02174"
	I1009 19:41:39.613567  484904 cri.go:89] found id: "54d87d820dc1938f8d34bfd342416dac5f2adf821653498270e0a72d6b35d5f4"
	I1009 19:41:39.613571  484904 cri.go:89] found id: "a30da166d076a40602ea5309119d43b6346a615ba7729aabad0cf470c756b482"
	I1009 19:41:39.613574  484904 cri.go:89] found id: "8eab2000543c1d27f928d4c888318e070735d4233b4da58e8cfc2fd633fb79f3"
	I1009 19:41:39.613609  484904 cri.go:89] found id: "e31682d081500642ab41d785eae95bb338cc60ecad8ebf0b9e2c526d9258fe13"
	I1009 19:41:39.613613  484904 cri.go:89] found id: "17c33253b376c0b387ae7ebe4e58be315318a5622f757e30efdd1a57e6553e7d"
	I1009 19:41:39.613617  484904 cri.go:89] found id: "4be19799344c62649bd6f8d67821e8145a7756b618a2eef2982c64fd4b30a0c8"
	I1009 19:41:39.613620  484904 cri.go:89] found id: "bd786fb30861863193a83b2938180673280745718306b229493eaee4d7f1c6df"
	I1009 19:41:39.613627  484904 cri.go:89] found id: "446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f"
	I1009 19:41:39.613635  484904 cri.go:89] found id: "be2d4760bae314d76c51cc9122f7ba323e293e47592bbcb36827264e3fac02c6"
	I1009 19:41:39.613639  484904 cri.go:89] found id: ""
	I1009 19:41:39.613700  484904 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:41:39.628113  484904 out.go:203] 
	W1009 19:41:39.631110  484904 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:41:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:41:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:41:39.631137  484904 out.go:285] * 
	* 
	W1009 19:41:39.638722  484904 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:41:39.643753  484904 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-779570 --alsologtostderr -v=1 failed: exit status 80
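The stderr above shows why pause exits with GUEST_PAUSE: the first pass disables the kubelet (pause.go:52 flips from "kubelet running: true" to "false"), but the follow-up container listing runs `sudo runc list -f json` inside the node, which fails because /run/runc does not exist, so the retries never reach the step that actually freezes the containers. Hedged commands to reproduce the probe and to list the same kube-system containers through the CRI instead (not part of the captured run; assumes crictl is on the node's PATH):

    out/minikube-linux-arm64 -p embed-certs-779570 ssh -- "sudo runc list -f json"
    out/minikube-linux-arm64 -p embed-certs-779570 ssh -- "sudo crictl ps --state Running --label io.kubernetes.pod.namespace=kube-system -q"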
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-779570
helpers_test.go:243: (dbg) docker inspect embed-certs-779570:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0",
	        "Created": "2025-10-09T19:38:39.409674246Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481485,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:40:23.132560298Z",
	            "FinishedAt": "2025-10-09T19:40:22.322025624Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/hosts",
	        "LogPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0-json.log",
	        "Name": "/embed-certs-779570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-779570:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-779570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0",
	                "LowerDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-779570",
	                "Source": "/var/lib/docker/volumes/embed-certs-779570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-779570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-779570",
	                "name.minikube.sigs.k8s.io": "embed-certs-779570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99fe8f30c6f6cceaa4f628cf0b4e79dfc738de51f96ab456abfb7978d191a5de",
	            "SandboxKey": "/var/run/docker/netns/99fe8f30c6f6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-779570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:07:d0:30:47:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "28e70e683a9e94690b95b84e3e58ac8af1a42ba0d4f6a915911a12474f440d3d",
	                    "EndpointID": "013c95b0f494fcbb7b51312a66e30a1073195033754f6b3aad9f24e66e01735c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-779570",
	                        "81a5b0bcbd3e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
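The docker inspect output above confirms the node container is healthy at the Docker level ("Running": true, "Paused": false, RestartCount 0), so the exit status 80 comes from the in-guest runc probe shown earlier, not from the container runtime on the host; the kubelet, however, was already disabled by the first pass of the pause logic. A hedged one-liner to read the same state directly (not part of the captured run):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-779570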
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-779570 -n embed-certs-779570
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-779570 -n embed-certs-779570: exit status 2 (355.450889ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-779570 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-779570 logs -n 25: (1.542446561s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-expiration-259172                                                                                                                                                                                                                     │ cert-expiration-259172       │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ stop    │ -p no-preload-678119 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p no-preload-678119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ stop    │ -p embed-certs-779570 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p disable-driver-mounts-557073                                                                                                                                                                                                               │ disable-driver-mounts-557073 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ image   │ embed-certs-779570 image list --format=json                                                                                                                                                                                                   │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-779570 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:40:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:40:22.864592  481360 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:40:22.864734  481360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:22.864745  481360 out.go:374] Setting ErrFile to fd 2...
	I1009 19:40:22.864751  481360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:22.864998  481360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:40:22.865358  481360 out.go:368] Setting JSON to false
	I1009 19:40:22.866286  481360 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8574,"bootTime":1760030249,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:40:22.866356  481360 start.go:141] virtualization:  
	I1009 19:40:22.869409  481360 out.go:179] * [embed-certs-779570] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:40:22.873551  481360 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:40:22.873596  481360 notify.go:220] Checking for updates...
	I1009 19:40:22.879959  481360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:40:22.882876  481360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:40:22.885710  481360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:40:22.888632  481360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:40:22.891664  481360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:40:22.894985  481360 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:22.895535  481360 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:40:22.923978  481360 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:40:22.924130  481360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:40:22.981565  481360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:40:22.972415642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:40:22.981673  481360 docker.go:318] overlay module found
	I1009 19:40:22.984794  481360 out.go:179] * Using the docker driver based on existing profile
	I1009 19:40:22.987854  481360 start.go:305] selected driver: docker
	I1009 19:40:22.987881  481360 start.go:925] validating driver "docker" against &{Name:embed-certs-779570 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:22.987988  481360 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:40:22.988725  481360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:40:23.048380  481360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:40:23.038751526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:40:23.048719  481360 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:40:23.048752  481360 cni.go:84] Creating CNI manager for ""
	I1009 19:40:23.048808  481360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:40:23.048853  481360 start.go:349] cluster config:
	{Name:embed-certs-779570 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:23.052049  481360 out.go:179] * Starting "embed-certs-779570" primary control-plane node in "embed-certs-779570" cluster
	I1009 19:40:23.054890  481360 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:40:23.057836  481360 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:40:23.060738  481360 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:40:23.060791  481360 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:40:23.060808  481360 cache.go:64] Caching tarball of preloaded images
	I1009 19:40:23.060821  481360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:40:23.060888  481360 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:40:23.060898  481360 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:40:23.061017  481360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/config.json ...
	I1009 19:40:23.080722  481360 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:40:23.080746  481360 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:40:23.080762  481360 cache.go:242] Successfully downloaded all kic artifacts
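The image check above can be reproduced with plain docker; a minimal sketch (the image reference is taken from the log, the echo/pull fallback is only illustrative):
	# exit status 0 means the kicbase image is already in the local daemon, so no pull is needed
	KIC=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	docker image inspect "$KIC" >/dev/null 2>&1 \
	  && echo "kicbase present, skipping pull" \
	  || docker pull "$KIC"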
	I1009 19:40:23.080787  481360 start.go:360] acquireMachinesLock for embed-certs-779570: {Name:mk171645357bc6d63c40c917bb88ac3ae25dd14e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:40:23.080868  481360 start.go:364] duration metric: took 56.567µs to acquireMachinesLock for "embed-certs-779570"
	I1009 19:40:23.080893  481360 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:40:23.080901  481360 fix.go:54] fixHost starting: 
	I1009 19:40:23.081165  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:23.098326  481360 fix.go:112] recreateIfNeeded on embed-certs-779570: state=Stopped err=<nil>
	W1009 19:40:23.098355  481360 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:40:19.234262  480157 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-661639:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.470962366s)
	I1009 19:40:19.234293  480157 kic.go:203] duration metric: took 4.471107115s to extract preloaded images to volume ...
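The preload step above unpacks the cached image tarball straight into the machine's docker volume by running tar inside the kicbase image. A condensed sketch of the same command, with the paths and names shown in the log line above:
	# extract the lz4-compressed preload tarball into the node's named volume
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro \
	  -v default-k8s-diff-port-661639:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 \
	  -I lz4 -xf /preloaded.tar -C /extractDir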
	W1009 19:40:19.234436  480157 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:40:19.234566  480157 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:40:19.289012  480157 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-661639 --name default-k8s-diff-port-661639 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-661639 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-661639 --network default-k8s-diff-port-661639 --ip 192.168.76.2 --volume default-k8s-diff-port-661639:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:40:19.561039  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Running}}
	I1009 19:40:19.586144  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:19.610995  480157 cli_runner.go:164] Run: docker exec default-k8s-diff-port-661639 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:40:19.661324  480157 oci.go:144] the created container "default-k8s-diff-port-661639" has a running status.
	I1009 19:40:19.661364  480157 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa...
	I1009 19:40:20.585861  480157 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:40:20.604809  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:20.621258  480157 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:40:20.621281  480157 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-661639 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:40:20.660955  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:20.685339  480157 machine.go:93] provisionDockerMachine start ...
	I1009 19:40:20.685437  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:20.705939  480157 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:20.706326  480157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1009 19:40:20.706345  480157 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:40:20.707042  480157 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:40:23.877582  480157 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661639
	
	I1009 19:40:23.877605  480157 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-661639"
	I1009 19:40:23.877669  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:23.894632  480157 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:23.894945  480157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1009 19:40:23.894963  480157 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-661639 && echo "default-k8s-diff-port-661639" | sudo tee /etc/hostname
	I1009 19:40:24.049301  480157 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661639
	
	I1009 19:40:24.049407  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:24.070603  480157 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:24.070912  480157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1009 19:40:24.070931  480157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-661639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-661639/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-661639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:40:24.218544  480157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
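The script above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one, and does nothing when the hostname is already listed. A quick verification from inside the node (sketch, not part of the test run):
	# the node should now resolve its own hostname locally
	grep -n 'default-k8s-diff-port-661639' /etc/hosts   # expect a 127.0.1.1 entry
	hostname                                            # expect: default-k8s-diff-port-661639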
	I1009 19:40:24.218575  480157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:40:24.218605  480157 ubuntu.go:190] setting up certificates
	I1009 19:40:24.218622  480157 provision.go:84] configureAuth start
	I1009 19:40:24.218689  480157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:40:24.236821  480157 provision.go:143] copyHostCerts
	I1009 19:40:24.236892  480157 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:40:24.236905  480157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:40:24.236987  480157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:40:24.237104  480157 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:40:24.237116  480157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:40:24.237146  480157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:40:24.237215  480157 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:40:24.237226  480157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:40:24.237252  480157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:40:24.237317  480157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-661639 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-661639 localhost minikube]
	I1009 19:40:24.397287  480157 provision.go:177] copyRemoteCerts
	I1009 19:40:24.397364  480157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:40:24.397407  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:24.414108  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
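minikube reaches the node's sshd through a port docker publishes on 127.0.0.1 (33445 in this run). A sketch of finding that port and opening the same session by hand, using the key path from the log (the ssh options are illustrative):
	# look up the host port mapped to the container's 22/tcp, then connect as the "docker" user
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-661639)
	ssh -o StrictHostKeyChecking=no -p "$PORT" \
	  -i /home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa \
	  docker@127.0.0.1 hostname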
	I1009 19:40:24.517918  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:40:24.535684  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 19:40:24.553532  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
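The server certificate copied above was generated with the SANs listed in the provision log (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube). One way to confirm that on the host, using openssl (not something the test invokes, just an inspection sketch):
	# print the Subject Alternative Name extension of the generated server certificate
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'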
	I1009 19:40:24.571179  480157 provision.go:87] duration metric: took 352.527985ms to configureAuth
	I1009 19:40:24.571208  480157 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:40:24.571389  480157 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:24.571510  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:24.588802  480157 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:24.589122  480157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1009 19:40:24.589144  480157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:40:24.934053  480157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:40:24.934076  480157 machine.go:96] duration metric: took 4.24871416s to provisionDockerMachine
	I1009 19:40:24.934086  480157 client.go:171] duration metric: took 10.872407548s to LocalClient.Create
	I1009 19:40:24.934100  480157 start.go:167] duration metric: took 10.872481149s to libmachine.API.Create "default-k8s-diff-port-661639"
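The provisioning step above wrote /etc/sysconfig/crio.minikube so that CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry, then restarted CRI-O; how the kicbase crio unit consumes that file is specific to the image, but the result can be checked on the node (sketch):
	# inspect the file written over SSH during provisioning and confirm crio came back up
	cat /etc/sysconfig/crio.minikube
	# expected content: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio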
	I1009 19:40:24.934107  480157 start.go:293] postStartSetup for "default-k8s-diff-port-661639" (driver="docker")
	I1009 19:40:24.934117  480157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:40:24.934209  480157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:40:24.934263  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:24.953503  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:25.064060  480157 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:40:25.067951  480157 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:40:25.068113  480157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:40:25.068150  480157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:40:25.068234  480157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:40:25.068346  480157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:40:25.068469  480157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:40:25.077438  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:40:25.099272  480157 start.go:296] duration metric: took 165.148843ms for postStartSetup
	I1009 19:40:25.099739  480157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:40:25.126934  480157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/config.json ...
	I1009 19:40:25.127249  480157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:40:25.127302  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:25.145067  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:25.248164  480157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:40:25.253169  480157 start.go:128] duration metric: took 11.19519468s to createHost
	I1009 19:40:25.253196  480157 start.go:83] releasing machines lock for "default-k8s-diff-port-661639", held for 11.195327604s
	I1009 19:40:25.253271  480157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:40:25.270448  480157 ssh_runner.go:195] Run: cat /version.json
	I1009 19:40:25.270511  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:25.270792  480157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:40:25.270864  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:25.294426  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:25.299718  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:25.393929  480157 ssh_runner.go:195] Run: systemctl --version
	I1009 19:40:25.484710  480157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:40:25.521854  480157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:40:25.526176  480157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:40:25.526311  480157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:40:25.555732  480157 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
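Because the "docker" driver is paired with the "crio" runtime and kindnet, pre-existing bridge and podman CNI configs are renamed out of the way rather than deleted. A sketch of checking what was disabled on the node:
	# configs moved aside get a .mk_disabled suffix; kindnet drops its own config here later
	ls -l /etc/cni/net.d/
	# in this run: 87-podman-bridge.conflist and 10-crio-bridge.conflist.disabled were disabled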
	I1009 19:40:25.555767  480157 start.go:495] detecting cgroup driver to use...
	I1009 19:40:25.555804  480157 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:40:25.555868  480157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:40:25.573792  480157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:40:25.587117  480157 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:40:25.587184  480157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:40:25.605516  480157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:40:25.625768  480157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:40:25.746179  480157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:40:25.877610  480157 docker.go:234] disabling docker service ...
	I1009 19:40:25.877680  480157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:40:25.898629  480157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:40:25.911670  480157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:40:26.023831  480157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:40:26.135074  480157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:40:26.148302  480157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:40:26.165178  480157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:40:26.165246  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.174157  480157 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:40:26.174234  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.183237  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.192100  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.200514  480157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:40:26.208426  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.217279  480157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.230965  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.239791  480157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:40:26.247274  480157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:40:26.254824  480157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:26.359364  480157 ssh_runner.go:195] Run: sudo systemctl restart crio
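The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A condensed sketch of the core edits, with the values used in this run:
	# point CRI-O at the pause image and cgroup driver minikube expects, then reload
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
	# crictl reaches the restarted runtime via the endpoint configured in /etc/crictl.yaml
	sudo crictl version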
	I1009 19:40:26.518221  480157 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:40:26.518317  480157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:40:26.523761  480157 start.go:563] Will wait 60s for crictl version
	I1009 19:40:26.523859  480157 ssh_runner.go:195] Run: which crictl
	I1009 19:40:26.529127  480157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:40:26.553888  480157 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:40:26.554056  480157 ssh_runner.go:195] Run: crio --version
	I1009 19:40:26.586608  480157 ssh_runner.go:195] Run: crio --version
	I1009 19:40:26.636967  480157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:40:23.101577  481360 out.go:252] * Restarting existing docker container for "embed-certs-779570" ...
	I1009 19:40:23.101669  481360 cli_runner.go:164] Run: docker start embed-certs-779570
	I1009 19:40:23.355375  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:23.377589  481360 kic.go:430] container "embed-certs-779570" state is running.
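For an existing profile the machine is not recreated; the stopped node container is simply started again. The equivalent by hand, with the container name from the log:
	# restart the stopped minikube node container and confirm it is running
	docker start embed-certs-779570
	docker container inspect embed-certs-779570 --format '{{.State.Status}}'   # expect: running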
	I1009 19:40:23.377983  481360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-779570
	I1009 19:40:23.403303  481360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/config.json ...
	I1009 19:40:23.403553  481360 machine.go:93] provisionDockerMachine start ...
	I1009 19:40:23.403623  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:23.425489  481360 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:23.426040  481360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1009 19:40:23.426057  481360 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:40:23.426833  481360 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:40:26.573913  481360 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-779570
	
	I1009 19:40:26.574002  481360 ubuntu.go:182] provisioning hostname "embed-certs-779570"
	I1009 19:40:26.574095  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:26.599982  481360 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:26.600312  481360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1009 19:40:26.600324  481360 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-779570 && echo "embed-certs-779570" | sudo tee /etc/hostname
	I1009 19:40:26.768779  481360 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-779570
	
	I1009 19:40:26.768896  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:26.790727  481360 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:26.791035  481360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1009 19:40:26.791058  481360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-779570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-779570/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-779570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:40:26.946459  481360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:40:26.946538  481360 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:40:26.946582  481360 ubuntu.go:190] setting up certificates
	I1009 19:40:26.946631  481360 provision.go:84] configureAuth start
	I1009 19:40:26.946717  481360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-779570
	I1009 19:40:26.971990  481360 provision.go:143] copyHostCerts
	I1009 19:40:26.972072  481360 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:40:26.972088  481360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:40:26.972172  481360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:40:26.972283  481360 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:40:26.972289  481360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:40:26.972326  481360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:40:26.972385  481360 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:40:26.972396  481360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:40:26.972421  481360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:40:26.972469  481360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.embed-certs-779570 san=[127.0.0.1 192.168.85.2 embed-certs-779570 localhost minikube]
	I1009 19:40:27.456687  481360 provision.go:177] copyRemoteCerts
	I1009 19:40:27.456820  481360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:40:27.456903  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:27.485385  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:27.588218  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:40:27.611548  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:40:27.633063  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:40:27.654017  481360 provision.go:87] duration metric: took 707.357328ms to configureAuth
	I1009 19:40:27.654068  481360 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:40:27.654289  481360 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:27.654424  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:27.674765  481360 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:27.675087  481360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1009 19:40:27.675102  481360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:40:26.639747  480157 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-661639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:40:26.667804  480157 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:40:26.672533  480157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:26.683723  480157 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-661639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:40:26.683842  480157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:40:26.683908  480157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:26.725690  480157 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:40:26.725717  480157 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:40:26.725776  480157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:26.753021  480157 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:40:26.753046  480157 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:40:26.753054  480157 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1009 19:40:26.753151  480157 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-661639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
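The kubelet flags above land in a systemd drop-in rather than the main unit file (the log later shows the content being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Standard systemd commands can show the effective unit on the node (sketch, not part of the test run):
	# show the kubelet unit plus every drop-in that overrides it, including minikube's 10-kubeadm.conf
	systemctl cat kubelet
	systemctl show -p ExecStart kubelet   # the empty ExecStart= above resets the packaged command line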
	I1009 19:40:26.753229  480157 ssh_runner.go:195] Run: crio config
	I1009 19:40:26.830632  480157 cni.go:84] Creating CNI manager for ""
	I1009 19:40:26.830657  480157 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:40:26.830675  480157 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:40:26.830698  480157 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-661639 NodeName:default-k8s-diff-port-661639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:40:26.830820  480157 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-661639"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:40:26.830896  480157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:40:26.844397  480157 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:40:26.844472  480157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:40:26.852961  480157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1009 19:40:26.869624  480157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:40:26.888296  480157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
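The generated kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs. A way to sanity-check such a file without touching the cluster, using standard kubeadm flags (not something minikube itself invokes here):
	# list the images the config implies, then do a dry run of init against it
	sudo kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run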
	I1009 19:40:26.903562  480157 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:40:26.907527  480157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:26.917931  480157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:27.079343  480157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:40:27.097441  480157 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639 for IP: 192.168.76.2
	I1009 19:40:27.097463  480157 certs.go:195] generating shared ca certs ...
	I1009 19:40:27.097483  480157 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:27.097630  480157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:40:27.097722  480157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:40:27.097735  480157 certs.go:257] generating profile certs ...
	I1009 19:40:27.097793  480157 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.key
	I1009 19:40:27.097818  480157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt with IP's: []
	I1009 19:40:28.401196  480157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt ...
	I1009 19:40:28.401273  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: {Name:mk651e23b2facd267582e16e7a4694b152f5962b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:28.401515  480157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.key ...
	I1009 19:40:28.401559  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.key: {Name:mkc71d476cb6ed1cdf5ce0926d86ca657e7349ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:28.401711  480157 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb
	I1009 19:40:28.401757  480157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt.6f8704fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 19:40:28.084248  481360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:40:28.084272  481360 machine.go:96] duration metric: took 4.680700732s to provisionDockerMachine
	I1009 19:40:28.084285  481360 start.go:293] postStartSetup for "embed-certs-779570" (driver="docker")
	I1009 19:40:28.084297  481360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:40:28.084372  481360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:40:28.084414  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:28.124931  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:28.227076  481360 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:40:28.231138  481360 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:40:28.231209  481360 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:40:28.231236  481360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:40:28.231322  481360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:40:28.231448  481360 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:40:28.231593  481360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:40:28.239755  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:40:28.260099  481360 start.go:296] duration metric: took 175.798192ms for postStartSetup
	I1009 19:40:28.260219  481360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:40:28.260288  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:28.279899  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:28.379330  481360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:40:28.384752  481360 fix.go:56] duration metric: took 5.303843775s for fixHost
	I1009 19:40:28.384778  481360 start.go:83] releasing machines lock for "embed-certs-779570", held for 5.30389596s
	I1009 19:40:28.384854  481360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-779570
	I1009 19:40:28.403719  481360 ssh_runner.go:195] Run: cat /version.json
	I1009 19:40:28.403768  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:28.403799  481360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:40:28.403852  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:28.434976  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:28.466494  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:28.549986  481360 ssh_runner.go:195] Run: systemctl --version
	I1009 19:40:28.649699  481360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:40:28.728818  481360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:40:28.733537  481360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:40:28.733606  481360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:40:28.743637  481360 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:40:28.743657  481360 start.go:495] detecting cgroup driver to use...
	I1009 19:40:28.743696  481360 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:40:28.743746  481360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:40:28.760001  481360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:40:28.774334  481360 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:40:28.774389  481360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:40:28.791061  481360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:40:28.805814  481360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:40:28.956021  481360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:40:29.109476  481360 docker.go:234] disabling docker service ...
	I1009 19:40:29.109541  481360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:40:29.127886  481360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:40:29.142454  481360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:40:29.278990  481360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:40:29.493888  481360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:40:29.511633  481360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:40:29.542091  481360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:40:29.542297  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.553916  481360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:40:29.553997  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.563633  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.573252  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.582705  481360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:40:29.591430  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.600618  481360 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.609488  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.618910  481360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:40:29.627618  481360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:40:29.635849  481360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:29.771950  481360 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:40:29.954623  481360 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:40:29.954745  481360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:40:29.959182  481360 start.go:563] Will wait 60s for crictl version
	I1009 19:40:29.959284  481360 ssh_runner.go:195] Run: which crictl
	I1009 19:40:29.963092  481360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:40:29.993690  481360 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:40:29.994118  481360 ssh_runner.go:195] Run: crio --version
	I1009 19:40:30.081891  481360 ssh_runner.go:195] Run: crio --version
	I1009 19:40:30.135647  481360 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
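
The sequence of sed edits around 19:40:29 rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager, points crictl at the CRI-O socket, and then restarts the runtime. A minimal standalone sketch of that same sequence is below; paths, image tag, and sed expressions are taken from the log, while the backup copy is an extra safety step added here for illustration.

    #!/usr/bin/env bash
    # Illustrative sketch of the CRI-O reconfiguration steps shown in the log above.
    set -euo pipefail

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    PAUSE_IMAGE=registry.k8s.io/pause:3.10.1

    sudo cp "$CONF" "$CONF.bak"   # backup is illustrative only, not in the log

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml >/dev/null

    # Pin the pause image and use the cgroupfs cgroup manager, as in the log.
    sudo sed -i "s|^.*pause_image = .*$|pause_image = \"$PAUSE_IMAGE\"|" "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Reload units and restart CRI-O so the changes take effect.
    sudo systemctl daemon-reload
    sudo systemctl restart crio
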
	I1009 19:40:29.432732  480157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt.6f8704fb ...
	I1009 19:40:29.432823  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt.6f8704fb: {Name:mkd3bff069ed34901ca13fb8944bdb0bb4f880e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:29.433062  480157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb ...
	I1009 19:40:29.433101  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb: {Name:mk29fc5943bf7c39c5bcf5094b244b536c1b64b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:29.433233  480157 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt.6f8704fb -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt
	I1009 19:40:29.433358  480157 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key
	I1009 19:40:29.433464  480157 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key
	I1009 19:40:29.433515  480157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt with IP's: []
	I1009 19:40:29.800143  480157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt ...
	I1009 19:40:29.800219  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt: {Name:mk05067dbfdfdcaa6698d49684961bc3a981883f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:29.800484  480157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key ...
	I1009 19:40:29.800531  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key: {Name:mkc12939803757aaa092914e87f0edc14672b1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:29.801828  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:40:29.801919  480157 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:40:29.801947  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:40:29.801999  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:40:29.802044  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:40:29.802095  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:40:29.802199  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:40:29.802848  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:40:29.820746  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:40:29.841912  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:40:29.859466  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:40:29.876694  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:40:29.894949  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:40:29.919537  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:40:29.941484  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:40:29.965282  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:40:29.984316  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:40:30.008908  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:40:30.036266  480157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:40:30.053941  480157 ssh_runner.go:195] Run: openssl version
	I1009 19:40:30.062082  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:40:30.073850  480157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:40:30.079506  480157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:40:30.079682  480157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:40:30.139348  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:40:30.160037  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:40:30.178122  480157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.188345  480157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.188410  480157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.250032  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:40:30.259287  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:40:30.268628  480157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:30.273454  480157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:30.273526  480157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:30.315986  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
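
The openssl/ln pairs above install each CA into /etc/ssl/certs under its subject-hash name (for example b5213941.0 for minikubeCA.pem); OpenSSL looks up trust anchors in a hashed directory by exactly these <hash>.<n> links. A small sketch of the pattern, using a hypothetical certificate path:

    #!/usr/bin/env bash
    # Link a CA certificate into a hashed trust directory the way the log does.
    # CERT is a placeholder path; the ".0" suffix assumes no hash collision.
    set -euo pipefail

    CERT=/usr/share/ca-certificates/example-ca.pem   # hypothetical certificate
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # subject-name hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves CAs by <hash>.<n>
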
	I1009 19:40:30.325085  480157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:40:30.329380  480157 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:40:30.329435  480157 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-661639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:30.329509  480157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:40:30.329564  480157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:40:30.359866  480157 cri.go:89] found id: ""
	I1009 19:40:30.359942  480157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:40:30.370320  480157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:40:30.378698  480157 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:40:30.378768  480157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:40:30.389633  480157 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:40:30.389652  480157 kubeadm.go:157] found existing configuration files:
	
	I1009 19:40:30.389707  480157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 19:40:30.398971  480157 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:40:30.399044  480157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:40:30.407216  480157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 19:40:30.415391  480157 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:40:30.415450  480157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:40:30.423319  480157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 19:40:30.432541  480157 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:40:30.432605  480157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:40:30.440201  480157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 19:40:30.449318  480157 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:40:30.449391  480157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:40:30.460285  480157 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:40:30.528529  480157 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:40:30.528864  480157 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:40:30.553257  480157 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:40:30.553336  480157 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:40:30.553380  480157 kubeadm.go:318] OS: Linux
	I1009 19:40:30.553432  480157 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:40:30.553487  480157 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:40:30.553539  480157 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:40:30.553594  480157 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:40:30.553651  480157 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:40:30.553721  480157 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:40:30.553775  480157 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:40:30.553839  480157 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:40:30.553892  480157 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:40:30.654657  480157 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:40:30.654802  480157 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:40:30.654910  480157 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:40:30.678456  480157 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:40:30.138735  481360 cli_runner.go:164] Run: docker network inspect embed-certs-779570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:40:30.172456  481360 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:40:30.176552  481360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:30.192954  481360 kubeadm.go:883] updating cluster {Name:embed-certs-779570 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:40:30.193089  481360 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:40:30.193146  481360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:30.243029  481360 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:40:30.243103  481360 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:40:30.243178  481360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:30.278274  481360 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:40:30.278300  481360 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:40:30.278308  481360 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 19:40:30.278412  481360 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-779570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
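
The "[Service] / ExecStart= / ExecStart=..." fragment printed by kubeadm.go:946 is a systemd override: the empty ExecStart= first clears the command inherited from kubelet.service, then the second line supplies the kubelet flags minikube wants. The sketch below collapses it into a single drop-in for brevity (minikube actually splits it between /lib/systemd/system/kubelet.service and the 10-kubeadm.conf drop-in scp'd a few lines further down), and the flag list is abbreviated from the fragment above.

    #!/usr/bin/env bash
    # Sketch of installing the kubelet override above as a single systemd drop-in.
    set -euo pipefail

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-779570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
    EOF

    sudo systemctl daemon-reload      # pick up the new drop-in
    sudo systemctl restart kubelet    # restart with the overridden ExecStart
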
	I1009 19:40:30.278488  481360 ssh_runner.go:195] Run: crio config
	I1009 19:40:30.344900  481360 cni.go:84] Creating CNI manager for ""
	I1009 19:40:30.344921  481360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:40:30.344942  481360 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:40:30.344965  481360 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-779570 NodeName:embed-certs-779570 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:40:30.345093  481360 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-779570"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:40:30.345161  481360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:40:30.354547  481360 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:40:30.354610  481360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:40:30.364947  481360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1009 19:40:30.382536  481360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:40:30.399313  481360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
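
The generated config shown above is written to /var/tmp/minikube/kubeadm.yaml.new on the node (the 2215-byte scp just logged) and later handed to kubeadm init. To exercise a config like this by hand without touching a host, a dry run is one option; kubeadm.yaml below is only a placeholder for a file with the same contents.

    # Pre-pull the control-plane images the config implies (the preflight output
    # elsewhere in this log suggests the same command), then run init as a no-op.
    sudo kubeadm config images pull --config kubeadm.yaml
    sudo kubeadm init --config kubeadm.yaml --dry-run
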
	I1009 19:40:30.414066  481360 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:40:30.418760  481360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
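
Both /etc/hosts edits in this run (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: drop any existing line for the name, append the wanted mapping, and copy the rebuilt file back in a single sudo cp. A generic sketch of that pattern, with NAME and IP as placeholders:

    #!/usr/bin/env bash
    # Idempotently pin NAME to IP in /etc/hosts, mirroring the grep -v / echo /
    # sudo cp sequence shown in the log. NAME and IP are placeholder values.
    set -euo pipefail

    NAME=control-plane.minikube.internal
    IP=192.168.85.2
    TMP=$(mktemp)

    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "$TMP"
    sudo cp "$TMP" /etc/hosts
    rm -f "$TMP"
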
	I1009 19:40:30.429988  481360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:30.569655  481360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:40:30.590575  481360 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570 for IP: 192.168.85.2
	I1009 19:40:30.590633  481360 certs.go:195] generating shared ca certs ...
	I1009 19:40:30.590674  481360 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:30.590868  481360 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:40:30.590956  481360 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:40:30.590982  481360 certs.go:257] generating profile certs ...
	I1009 19:40:30.591116  481360 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/client.key
	I1009 19:40:30.591223  481360 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/apiserver.key.b138eccb
	I1009 19:40:30.591299  481360 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/proxy-client.key
	I1009 19:40:30.591457  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:40:30.591523  481360 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:40:30.591548  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:40:30.591606  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:40:30.591671  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:40:30.591717  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:40:30.591795  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:40:30.592663  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:40:30.612801  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:40:30.633012  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:40:30.653463  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:40:30.673732  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 19:40:30.694889  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:40:30.713448  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:40:30.745017  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:40:30.777633  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:40:30.811908  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:40:30.859163  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:40:30.906909  481360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:40:30.927608  481360 ssh_runner.go:195] Run: openssl version
	I1009 19:40:30.946594  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:40:30.955717  481360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.965539  481360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.965648  481360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:40:31.035625  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:40:31.058321  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:40:31.067735  481360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:31.073187  481360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:31.073308  481360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:31.116512  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:40:31.135301  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:40:31.145014  481360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:40:31.151601  481360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:40:31.151740  481360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:40:31.194708  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:40:31.203561  481360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:40:31.207976  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:40:31.251640  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:40:31.293646  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:40:31.336537  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:40:31.380212  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:40:31.439278  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
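
The -checkend 86400 invocations above ask openssl whether each certificate will still be valid 24 hours from now (exit status 0 if yes, non-zero if it would have expired), which is how minikube decides the existing control-plane certs can be reused. A small loop over the same paths makes the check easy to run by hand:

    #!/usr/bin/env bash
    # Report which cluster certificates would expire within the next 24h,
    # using the same -checkend test as the log above (86400 seconds = 1 day).
    for cert in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
                /var/lib/minikube/certs/apiserver-etcd-client.crt \
                /var/lib/minikube/certs/front-proxy-client.crt \
                /var/lib/minikube/certs/etcd/server.crt \
                /var/lib/minikube/certs/etcd/peer.crt \
                /var/lib/minikube/certs/etcd/healthcheck-client.crt; do
      if sudo openssl x509 -noout -in "$cert" -checkend 86400; then
        echo "OK:      $cert"
      else
        echo "EXPIRES: $cert (within 24h)"
      fi
    done
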
	I1009 19:40:31.541805  481360 kubeadm.go:400] StartCluster: {Name:embed-certs-779570 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:31.541942  481360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:40:31.542037  481360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:40:31.690601  481360 cri.go:89] found id: "bd786fb30861863193a83b2938180673280745718306b229493eaee4d7f1c6df"
	I1009 19:40:31.690683  481360 cri.go:89] found id: ""
	I1009 19:40:31.690772  481360 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:40:31.761236  481360 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:40:31Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:40:31.761427  481360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:40:31.804955  481360 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:40:31.805033  481360 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:40:31.805147  481360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:40:31.834466  481360 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:40:31.835054  481360 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-779570" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:40:31.835258  481360 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-779570" cluster setting kubeconfig missing "embed-certs-779570" context setting]
	I1009 19:40:31.835652  481360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:31.837550  481360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:40:31.859518  481360 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 19:40:31.859617  481360 kubeadm.go:601] duration metric: took 54.543706ms to restartPrimaryControlPlane
	I1009 19:40:31.859643  481360 kubeadm.go:402] duration metric: took 317.859868ms to StartCluster
	I1009 19:40:31.859691  481360 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:31.859786  481360 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:40:31.861009  481360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:31.861371  481360 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:40:31.861930  481360 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:40:31.862017  481360 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-779570"
	I1009 19:40:31.862032  481360 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-779570"
	W1009 19:40:31.862039  481360 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:40:31.862068  481360 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:40:31.862645  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:31.863025  481360 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:31.863117  481360 addons.go:69] Setting dashboard=true in profile "embed-certs-779570"
	I1009 19:40:31.863159  481360 addons.go:238] Setting addon dashboard=true in "embed-certs-779570"
	W1009 19:40:31.863193  481360 addons.go:247] addon dashboard should already be in state true
	I1009 19:40:31.863231  481360 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:40:31.863739  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:31.864321  481360 addons.go:69] Setting default-storageclass=true in profile "embed-certs-779570"
	I1009 19:40:31.864351  481360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-779570"
	I1009 19:40:31.864654  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:31.869548  481360 out.go:179] * Verifying Kubernetes components...
	I1009 19:40:31.872836  481360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:31.916255  481360 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:40:31.919811  481360 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:40:31.924192  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:40:31.924225  481360 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:40:31.924295  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:31.936221  481360 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:40:31.940078  481360 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:40:31.940108  481360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:40:31.940177  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:31.946635  481360 addons.go:238] Setting addon default-storageclass=true in "embed-certs-779570"
	W1009 19:40:31.946659  481360 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:40:31.946683  481360 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:40:31.947202  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:31.982384  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:32.008294  481360 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:40:32.008316  481360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:40:32.008392  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:32.011321  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:32.038628  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:32.381528  481360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:40:32.471921  481360 node_ready.go:35] waiting up to 6m0s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:40:32.504798  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:40:32.504821  481360 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:40:32.534327  481360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:40:32.535335  481360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:40:32.591865  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:40:32.591932  481360 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:40:32.742649  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:40:32.742676  481360 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:40:32.815250  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:40:32.815274  481360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:40:30.684080  480157 out.go:252]   - Generating certificates and keys ...
	I1009 19:40:30.684238  480157 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:40:30.684338  480157 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:40:32.374393  480157 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:40:33.018633  480157 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:40:33.168967  480157 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:40:32.987283  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:40:32.987311  481360 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:40:33.083458  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:40:33.083520  481360 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:40:33.117223  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:40:33.117287  481360 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:40:33.148661  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:40:33.148724  481360 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:40:33.168700  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:40:33.168763  481360 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:40:33.195929  481360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:40:34.256501  480157 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:40:35.147801  480157 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:40:35.148455  480157 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-661639 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:40:35.460544  480157 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:40:35.462516  480157 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-661639 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:40:35.639404  480157 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:40:35.834454  480157 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:40:36.230245  480157 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:40:36.230761  480157 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:40:38.144805  480157 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:40:38.669265  480157 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:40:38.788433  480157 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:40:39.080769  480157 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:40:39.929727  480157 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:40:39.931857  480157 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:40:39.934847  480157 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:40:41.123476  481360 node_ready.go:49] node "embed-certs-779570" is "Ready"
	I1009 19:40:41.123560  481360 node_ready.go:38] duration metric: took 8.651604376s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:40:41.123589  481360 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:40:41.123682  481360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:40:41.376431  481360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.842071872s)
	I1009 19:40:39.938308  480157 out.go:252]   - Booting up control plane ...
	I1009 19:40:39.938407  480157 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:40:39.938489  480157 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:40:39.939562  480157 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:40:39.961006  480157 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:40:39.961118  480157 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:40:39.969759  480157 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:40:39.969862  480157 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:40:39.969919  480157 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:40:40.197592  480157 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:40:40.197717  480157 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:40:41.202614  480157 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00169588s
	I1009 19:40:41.202727  480157 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:40:41.202812  480157 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1009 19:40:41.202906  480157 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:40:41.202988  480157 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:40:44.160796  481360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.625432102s)
	I1009 19:40:44.160918  481360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.964913062s)
	I1009 19:40:44.161054  481360 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.037334351s)
	I1009 19:40:44.161074  481360 api_server.go:72] duration metric: took 12.299621864s to wait for apiserver process to appear ...
	I1009 19:40:44.161084  481360 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:40:44.161102  481360 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:40:44.163947  481360 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-779570 addons enable metrics-server
	
	I1009 19:40:44.166785  481360 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1009 19:40:44.169597  481360 addons.go:514] duration metric: took 12.307639181s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1009 19:40:44.175129  481360 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:40:44.176200  481360 api_server.go:141] control plane version: v1.34.1
	I1009 19:40:44.176226  481360 api_server.go:131] duration metric: took 15.134505ms to wait for apiserver health ...
	I1009 19:40:44.176236  481360 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:40:44.191806  481360 system_pods.go:59] 8 kube-system pods found
	I1009 19:40:44.191845  481360 system_pods.go:61] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:40:44.191856  481360 system_pods.go:61] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:40:44.191862  481360 system_pods.go:61] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:40:44.191869  481360 system_pods.go:61] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:40:44.191878  481360 system_pods.go:61] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:40:44.191883  481360 system_pods.go:61] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:40:44.191891  481360 system_pods.go:61] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:40:44.191902  481360 system_pods.go:61] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Running
	I1009 19:40:44.191908  481360 system_pods.go:74] duration metric: took 15.666081ms to wait for pod list to return data ...
	I1009 19:40:44.191920  481360 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:40:44.196197  481360 default_sa.go:45] found service account: "default"
	I1009 19:40:44.196222  481360 default_sa.go:55] duration metric: took 4.295886ms for default service account to be created ...
	I1009 19:40:44.196232  481360 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:40:44.204360  481360 system_pods.go:86] 8 kube-system pods found
	I1009 19:40:44.204396  481360 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:40:44.204405  481360 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:40:44.204412  481360 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:40:44.204418  481360 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:40:44.204425  481360 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:40:44.204430  481360 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:40:44.204437  481360 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:40:44.204441  481360 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Running
	I1009 19:40:44.204449  481360 system_pods.go:126] duration metric: took 8.211395ms to wait for k8s-apps to be running ...
	I1009 19:40:44.204464  481360 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:40:44.204532  481360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:40:44.228761  481360 system_svc.go:56] duration metric: took 24.288582ms WaitForService to wait for kubelet
	I1009 19:40:44.228789  481360 kubeadm.go:586] duration metric: took 12.367334696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:40:44.228808  481360 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:40:44.236597  481360 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:40:44.236627  481360 node_conditions.go:123] node cpu capacity is 2
	I1009 19:40:44.236641  481360 node_conditions.go:105] duration metric: took 7.82689ms to run NodePressure ...
	I1009 19:40:44.236654  481360 start.go:241] waiting for startup goroutines ...
	I1009 19:40:44.236661  481360 start.go:246] waiting for cluster config update ...
	I1009 19:40:44.236674  481360 start.go:255] writing updated cluster config ...
	I1009 19:40:44.236953  481360 ssh_runner.go:195] Run: rm -f paused
	I1009 19:40:44.247976  481360 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:40:44.251689  481360 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:40:46.300706  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:40:46.028190  480157 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.824171766s
	I1009 19:40:49.493047  480157 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.28961737s
	I1009 19:40:51.706655  480157 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.503011829s
	I1009 19:40:51.731847  480157 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:40:51.747214  480157 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:40:51.766422  480157 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:40:51.766932  480157 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-661639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:40:51.783988  480157 kubeadm.go:318] [bootstrap-token] Using token: is484r.azrjmlmvdylfauu1
	W1009 19:40:48.757473  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:40:50.757819  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:40:52.759128  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:40:51.787113  480157 out.go:252]   - Configuring RBAC rules ...
	I1009 19:40:51.787241  480157 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:40:51.797741  480157 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:40:51.822020  480157 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:40:51.834255  480157 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:40:51.839719  480157 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:40:51.856528  480157 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:40:52.121496  480157 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:40:52.616162  480157 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:40:53.118371  480157 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:40:53.120064  480157 kubeadm.go:318] 
	I1009 19:40:53.120147  480157 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:40:53.120158  480157 kubeadm.go:318] 
	I1009 19:40:53.120247  480157 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:40:53.120257  480157 kubeadm.go:318] 
	I1009 19:40:53.120285  480157 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:40:53.120352  480157 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:40:53.120410  480157 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:40:53.120419  480157 kubeadm.go:318] 
	I1009 19:40:53.120476  480157 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:40:53.120485  480157 kubeadm.go:318] 
	I1009 19:40:53.120536  480157 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:40:53.120544  480157 kubeadm.go:318] 
	I1009 19:40:53.120600  480157 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:40:53.120683  480157 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:40:53.120759  480157 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:40:53.120768  480157 kubeadm.go:318] 
	I1009 19:40:53.120864  480157 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:40:53.120950  480157 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:40:53.120977  480157 kubeadm.go:318] 
	I1009 19:40:53.121071  480157 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token is484r.azrjmlmvdylfauu1 \
	I1009 19:40:53.121207  480157 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:40:53.121237  480157 kubeadm.go:318] 	--control-plane 
	I1009 19:40:53.121247  480157 kubeadm.go:318] 
	I1009 19:40:53.121337  480157 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:40:53.121345  480157 kubeadm.go:318] 
	I1009 19:40:53.121432  480157 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token is484r.azrjmlmvdylfauu1 \
	I1009 19:40:53.121543  480157 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 19:40:53.125569  480157 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:40:53.125845  480157 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:40:53.125992  480157 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:40:53.126027  480157 cni.go:84] Creating CNI manager for ""
	I1009 19:40:53.126036  480157 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:40:53.129471  480157 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:40:53.132408  480157 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:40:53.137336  480157 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:40:53.137356  480157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:40:53.172401  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:40:53.661103  480157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:40:53.661254  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:53.661330  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-661639 minikube.k8s.io/updated_at=2025_10_09T19_40_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=default-k8s-diff-port-661639 minikube.k8s.io/primary=true
	W1009 19:40:55.260263  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:40:57.760178  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:40:54.041422  480157 ops.go:34] apiserver oom_adj: -16
	I1009 19:40:54.041536  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:54.541626  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:55.042091  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:55.542219  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:56.041697  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:56.541744  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:57.042490  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:57.542097  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:58.042397  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:58.315779  480157 kubeadm.go:1113] duration metric: took 4.654576231s to wait for elevateKubeSystemPrivileges
	I1009 19:40:58.315809  480157 kubeadm.go:402] duration metric: took 27.986377566s to StartCluster
	I1009 19:40:58.315828  480157 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:58.315893  480157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:40:58.317530  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:58.317778  480157 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:40:58.318010  480157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:40:58.318178  480157 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:40:58.318268  480157 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-661639"
	I1009 19:40:58.318293  480157 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-661639"
	I1009 19:40:58.318319  480157 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:40:58.318432  480157 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:58.318497  480157 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-661639"
	I1009 19:40:58.318538  480157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-661639"
	I1009 19:40:58.318855  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:58.318857  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:58.323183  480157 out.go:179] * Verifying Kubernetes components...
	I1009 19:40:58.330097  480157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:58.356060  480157 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:40:58.357702  480157 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-661639"
	I1009 19:40:58.357740  480157 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:40:58.359210  480157 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:40:58.359229  480157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:40:58.359303  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:58.359524  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:58.402797  480157 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:40:58.402818  480157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:40:58.402887  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:58.404759  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:58.431744  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:58.864248  480157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:40:59.143505  480157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:40:59.196278  480157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:40:59.196401  480157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:41:00.400024  480157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256480726s)
	I1009 19:41:00.400303  480157 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.203879405s)
	I1009 19:41:00.400622  480157 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.204313191s)
	I1009 19:41:00.400651  480157 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1009 19:41:00.403605  480157 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1009 19:41:00.270275  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:02.757932  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:41:00.404404  480157 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-661639" to be "Ready" ...
	I1009 19:41:00.406776  480157 addons.go:514] duration metric: took 2.088575741s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 19:41:00.905194  480157 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-661639" context rescaled to 1 replicas
	W1009 19:41:02.407242  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:05.256792  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:07.257345  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:04.407551  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:06.407805  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:09.757830  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:12.256944  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:08.908367  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:11.407169  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:13.407428  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:14.256997  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:16.257126  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:15.407699  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:17.907270  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:18.757445  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:20.757661  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:20.407810  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:22.407855  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:23.257314  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:41:24.757768  481360 pod_ready.go:94] pod "coredns-66bc5c9577-4c9xb" is "Ready"
	I1009 19:41:24.757799  481360 pod_ready.go:86] duration metric: took 40.506085316s for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.760533  481360 pod_ready.go:83] waiting for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.765081  481360 pod_ready.go:94] pod "etcd-embed-certs-779570" is "Ready"
	I1009 19:41:24.765103  481360 pod_ready.go:86] duration metric: took 4.545228ms for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.767501  481360 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.772481  481360 pod_ready.go:94] pod "kube-apiserver-embed-certs-779570" is "Ready"
	I1009 19:41:24.772560  481360 pod_ready.go:86] duration metric: took 5.031042ms for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.775062  481360 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.956480  481360 pod_ready.go:94] pod "kube-controller-manager-embed-certs-779570" is "Ready"
	I1009 19:41:24.956507  481360 pod_ready.go:86] duration metric: took 181.416344ms for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:25.156320  481360 pod_ready.go:83] waiting for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:25.555664  481360 pod_ready.go:94] pod "kube-proxy-sp4bk" is "Ready"
	I1009 19:41:25.555690  481360 pod_ready.go:86] duration metric: took 399.339973ms for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:25.755977  481360 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:26.156611  481360 pod_ready.go:94] pod "kube-scheduler-embed-certs-779570" is "Ready"
	I1009 19:41:26.156639  481360 pod_ready.go:86] duration metric: took 400.633273ms for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:26.156652  481360 pod_ready.go:40] duration metric: took 41.908634821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:41:26.212305  481360 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:41:26.215550  481360 out.go:179] * Done! kubectl is now configured to use "embed-certs-779570" cluster and "default" namespace by default
	W1009 19:41:24.407937  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:26.907333  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:28.907997  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:31.407972  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:33.908001  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:36.407810  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:38.408430  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.019442235Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e140e878-1a2f-443e-b567-c165976647e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.02227801Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9aee8573-fa24-4ca5-aec0-97a586632629 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.024033469Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj/dashboard-metrics-scraper" id=966b3784-910f-4e6c-a79a-4ab2a947a95d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.024337891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.038352155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.039266159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.057280815Z" level=info msg="Created container 446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj/dashboard-metrics-scraper" id=966b3784-910f-4e6c-a79a-4ab2a947a95d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.058027064Z" level=info msg="Starting container: 446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f" id=ea850075-c2a0-46f8-87ac-76a3ed07350f name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.061913223Z" level=info msg="Started container" PID=1640 containerID=446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj/dashboard-metrics-scraper id=ea850075-c2a0-46f8-87ac-76a3ed07350f name=/runtime.v1.RuntimeService/StartContainer sandboxID=b73f8f15a6a96076a7b14b7e0585da90662e0bbb3972e1710dd325d6b7c9e211
	Oct 09 19:41:21 embed-certs-779570 conmon[1638]: conmon 446b9977cb579388c116 <ninfo>: container 1640 exited with status 1
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.41742994Z" level=info msg="Removing container: f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241" id=d5932865-955c-44dc-8645-6487946ff858 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.428135076Z" level=info msg="Error loading conmon cgroup of container f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241: cgroup deleted" id=d5932865-955c-44dc-8645-6487946ff858 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.431252488Z" level=info msg="Removed container f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj/dashboard-metrics-scraper" id=d5932865-955c-44dc-8645-6487946ff858 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.523222849Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.527616058Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.527653039Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.52767539Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.531185143Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.531222714Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.531244606Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.534606098Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.534638493Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.534662362Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.538643652Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.538681142Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	446b9977cb579       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   b73f8f15a6a96       dashboard-metrics-scraper-6ffb444bf9-8ghrj   kubernetes-dashboard
	fc0c3ef4a639c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   9ae854d822fbc       storage-provisioner                          kube-system
	be2d4760bae31       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   8cf59713c1912       kubernetes-dashboard-855c9754f9-dm67w        kubernetes-dashboard
	1d138f9143886       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   6aa7c5f6ae46c       busybox                                      default
	e12f4daf9424d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   5d1239ff91c87       kindnet-lgfbl                                kube-system
	54d87d820dc19       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   a88fe5b829d0a       coredns-66bc5c9577-4c9xb                     kube-system
	a30da166d076a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   fb50dd6d1300b       kube-proxy-sp4bk                             kube-system
	8eab2000543c1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   9ae854d822fbc       storage-provisioner                          kube-system
	e31682d081500       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   448b044326a74       kube-scheduler-embed-certs-779570            kube-system
	17c33253b376c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   51b1971357133       etcd-embed-certs-779570                      kube-system
	4be19799344c6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2bf93d519cbdb       kube-controller-manager-embed-certs-779570   kube-system
	bd786fb308618       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   826c363f38f07       kube-apiserver-embed-certs-779570            kube-system
	
	
	==> coredns [54d87d820dc1938f8d34bfd342416dac5f2adf821653498270e0a72d6b35d5f4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34104 - 44765 "HINFO IN 5734484300843791744.4217364063665467189. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010769866s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-779570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-779570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=embed-certs-779570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_39_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-779570
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:41:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:41:01 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:41:01 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:41:01 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:41:01 +0000   Thu, 09 Oct 2025 19:39:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-779570
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 14d3a0aafc5647aa9ddf97cb58a3e9e0
	  System UUID:                1e5d6a7e-cdd6-479d-b40f-96791041c4dd
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-4c9xb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m29s
	  kube-system                 etcd-embed-certs-779570                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m34s
	  kube-system                 kindnet-lgfbl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-embed-certs-779570             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-embed-certs-779570    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-sp4bk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-embed-certs-779570             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8ghrj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dm67w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m25s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   Starting                 2m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node embed-certs-779570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x8 over 2m46s)  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m34s                  kubelet          Node embed-certs-779570 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s                  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m34s                  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m30s                  node-controller  Node embed-certs-779570 event: Registered Node embed-certs-779570 in Controller
	  Normal   NodeReady                106s                   kubelet          Node embed-certs-779570 status is now: NodeReady
	  Normal   Starting                 71s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node embed-certs-779570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node embed-certs-779570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node embed-certs-779570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-779570 event: Registered Node embed-certs-779570 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[  +9.958825] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [17c33253b376c0b387ae7ebe4e58be315318a5622f757e30efdd1a57e6553e7d] <==
	{"level":"warn","ts":"2025-10-09T19:40:39.057157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.108176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.143162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.178341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.222340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.279635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.324020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.346901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.370871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.403765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.421595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.466244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.479181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.528544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.568505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.598604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.630492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.646344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.691640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.714563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.748324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.781495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.799076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.826181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.904882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36502","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:41:41 up  2:24,  0 user,  load average: 2.87, 3.04, 2.43
	Linux embed-certs-779570 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e12f4daf9424db3be061381fcf8b34688e94a433a3c0e1bca9a0641e37f02174] <==
	I1009 19:40:43.312534       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:40:43.312760       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:40:43.312891       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:40:43.312903       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:40:43.312913       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:40:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:40:43.527419       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:40:43.527436       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:40:43.527444       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:40:43.527555       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:41:13.527665       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:41:13.527665       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:41:13.527764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 19:41:13.527821       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 19:41:14.927915       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:41:14.927957       1 metrics.go:72] Registering metrics
	I1009 19:41:14.928057       1 controller.go:711] "Syncing nftables rules"
	I1009 19:41:23.522890       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:41:23.522943       1 main.go:301] handling current node
	I1009 19:41:33.529165       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:41:33.529204       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bd786fb30861863193a83b2938180673280745718306b229493eaee4d7f1c6df] <==
	I1009 19:40:41.220655       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:40:41.231751       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:40:41.245171       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 19:40:41.250157       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:40:41.281136       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 19:40:41.292508       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:40:41.292556       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:40:41.292758       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:40:41.292908       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:40:41.292977       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 19:40:41.293067       1 aggregator.go:171] initial CRD sync complete...
	I1009 19:40:41.293077       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 19:40:41.293081       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:40:41.293086       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:40:41.863967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:40:42.073217       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:40:43.575315       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:40:43.744674       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:40:43.852260       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:40:43.870523       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:40:44.040475       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.49.130"}
	I1009 19:40:44.076186       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.156.159"}
	I1009 19:40:45.879457       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:40:46.161164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:40:46.243044       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4be19799344c62649bd6f8d67821e8145a7756b618a2eef2982c64fd4b30a0c8] <==
	I1009 19:40:45.619304       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:40:45.619356       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:40:45.619610       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:40:45.621322       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:40:45.634222       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:40:45.654202       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:40:45.654204       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 19:40:45.654223       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:40:45.678256       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:40:45.678298       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:40:45.678334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:40:45.678373       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:40:45.678408       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:40:45.678435       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:40:45.678645       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:40:45.679475       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:40:45.680937       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:40:45.680971       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:40:45.682385       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:40:45.693982       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:40:45.702535       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:40:45.758731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:40:45.758763       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:40:45.758777       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:40:45.782512       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a30da166d076a40602ea5309119d43b6346a615ba7729aabad0cf470c756b482] <==
	I1009 19:40:43.561323       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:40:43.739026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:40:43.877646       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:40:43.879698       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:40:43.879790       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:40:44.063917       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:40:44.063971       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:40:44.085871       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:40:44.093019       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:40:44.093050       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:40:44.106465       1 config.go:200] "Starting service config controller"
	I1009 19:40:44.106483       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:40:44.106518       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:40:44.106522       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:40:44.106534       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:40:44.106538       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:40:44.107175       1 config.go:309] "Starting node config controller"
	I1009 19:40:44.107184       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:40:44.107191       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:40:44.207362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:40:44.207460       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:40:44.207486       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e31682d081500642ab41d785eae95bb338cc60ecad8ebf0b9e2c526d9258fe13] <==
	I1009 19:40:38.524402       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:40:41.351613       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:40:41.351654       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:40:41.371103       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:40:41.371210       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 19:40:41.371236       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 19:40:41.371260       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:40:41.373371       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:40:41.373402       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:40:41.373423       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:40:41.373429       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:40:41.482837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:40:41.483264       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 19:40:41.483343       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: I1009 19:40:46.326478     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsg7h\" (UniqueName: \"kubernetes.io/projected/2d517ad9-c456-40c1-ae85-a137f48a5f5e-kube-api-access-dsg7h\") pod \"kubernetes-dashboard-855c9754f9-dm67w\" (UID: \"2d517ad9-c456-40c1-ae85-a137f48a5f5e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dm67w"
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: I1009 19:40:46.329030     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m9rd\" (UniqueName: \"kubernetes.io/projected/d3fa6474-3aea-4647-928f-bd921690d575-kube-api-access-6m9rd\") pod \"dashboard-metrics-scraper-6ffb444bf9-8ghrj\" (UID: \"d3fa6474-3aea-4647-928f-bd921690d575\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj"
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: I1009 19:40:46.329236     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2d517ad9-c456-40c1-ae85-a137f48a5f5e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dm67w\" (UID: \"2d517ad9-c456-40c1-ae85-a137f48a5f5e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dm67w"
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: I1009 19:40:46.329343     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6474-3aea-4647-928f-bd921690d575-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8ghrj\" (UID: \"d3fa6474-3aea-4647-928f-bd921690d575\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj"
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: W1009 19:40:46.538292     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/crio-8cf59713c1912d2e6a9b209eee9c3b1965eacbdc9baef200dcae3738e69a7744 WatchSource:0}: Error finding container 8cf59713c1912d2e6a9b209eee9c3b1965eacbdc9baef200dcae3738e69a7744: Status 404 returned error can't find the container with id 8cf59713c1912d2e6a9b209eee9c3b1965eacbdc9baef200dcae3738e69a7744
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: W1009 19:40:46.569778     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/crio-b73f8f15a6a96076a7b14b7e0585da90662e0bbb3972e1710dd325d6b7c9e211 WatchSource:0}: Error finding container b73f8f15a6a96076a7b14b7e0585da90662e0bbb3972e1710dd325d6b7c9e211: Status 404 returned error can't find the container with id b73f8f15a6a96076a7b14b7e0585da90662e0bbb3972e1710dd325d6b7c9e211
	Oct 09 19:40:53 embed-certs-779570 kubelet[779]: I1009 19:40:53.338535     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dm67w" podStartSLOduration=0.696959997 podStartE2EDuration="7.338518365s" podCreationTimestamp="2025-10-09 19:40:46 +0000 UTC" firstStartedPulling="2025-10-09 19:40:46.541897346 +0000 UTC m=+15.952288492" lastFinishedPulling="2025-10-09 19:40:53.183455632 +0000 UTC m=+22.593846860" observedRunningTime="2025-10-09 19:40:53.335162871 +0000 UTC m=+22.745554034" watchObservedRunningTime="2025-10-09 19:40:53.338518365 +0000 UTC m=+22.748909511"
	Oct 09 19:41:00 embed-certs-779570 kubelet[779]: I1009 19:41:00.355793     779 scope.go:117] "RemoveContainer" containerID="0260b5189e2ff6224db807004f4be76c28292e028821d6f93baa07f451d2d990"
	Oct 09 19:41:01 embed-certs-779570 kubelet[779]: I1009 19:41:01.354600     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:01 embed-certs-779570 kubelet[779]: E1009 19:41:01.354943     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:01 embed-certs-779570 kubelet[779]: I1009 19:41:01.355117     779 scope.go:117] "RemoveContainer" containerID="0260b5189e2ff6224db807004f4be76c28292e028821d6f93baa07f451d2d990"
	Oct 09 19:41:02 embed-certs-779570 kubelet[779]: I1009 19:41:02.358846     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:02 embed-certs-779570 kubelet[779]: E1009 19:41:02.359005     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:06 embed-certs-779570 kubelet[779]: I1009 19:41:06.499050     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:06 embed-certs-779570 kubelet[779]: E1009 19:41:06.499231     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:13 embed-certs-779570 kubelet[779]: I1009 19:41:13.385070     779 scope.go:117] "RemoveContainer" containerID="8eab2000543c1d27f928d4c888318e070735d4233b4da58e8cfc2fd633fb79f3"
	Oct 09 19:41:21 embed-certs-779570 kubelet[779]: I1009 19:41:21.018426     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:21 embed-certs-779570 kubelet[779]: I1009 19:41:21.410630     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:21 embed-certs-779570 kubelet[779]: I1009 19:41:21.410991     779 scope.go:117] "RemoveContainer" containerID="446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f"
	Oct 09 19:41:21 embed-certs-779570 kubelet[779]: E1009 19:41:21.411200     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:26 embed-certs-779570 kubelet[779]: I1009 19:41:26.505673     779 scope.go:117] "RemoveContainer" containerID="446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f"
	Oct 09 19:41:26 embed-certs-779570 kubelet[779]: E1009 19:41:26.505894     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:38 embed-certs-779570 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:41:38 embed-certs-779570 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:41:38 embed-certs-779570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [be2d4760bae314d76c51cc9122f7ba323e293e47592bbcb36827264e3fac02c6] <==
	2025/10/09 19:40:53 Starting overwatch
	2025/10/09 19:40:53 Using namespace: kubernetes-dashboard
	2025/10/09 19:40:53 Using in-cluster config to connect to apiserver
	2025/10/09 19:40:53 Using secret token for csrf signing
	2025/10/09 19:40:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 19:40:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 19:40:53 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 19:40:53 Generating JWE encryption key
	2025/10/09 19:40:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 19:40:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 19:40:55 Initializing JWE encryption key from synchronized object
	2025/10/09 19:40:55 Creating in-cluster Sidecar client
	2025/10/09 19:40:55 Serving insecurely on HTTP port: 9090
	2025/10/09 19:40:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:41:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8eab2000543c1d27f928d4c888318e070735d4233b4da58e8cfc2fd633fb79f3] <==
	I1009 19:40:42.739220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:41:12.740570       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fc0c3ef4a639ccbc3d6ed8d36520f8322003481b7ac29e100ae0450982064103] <==
	I1009 19:41:13.461316       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:41:13.461373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 19:41:13.465221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:16.920554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:21.181307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:24.779813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:27.833997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:30.855697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:30.860659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:41:30.860923       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:41:30.861099       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-779570_346f8794-c177-449a-ad4d-c6ab0738ea79!
	I1009 19:41:30.861344       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba6ad425-7ecb-45c9-9bf0-c63c463c7246", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-779570_346f8794-c177-449a-ad4d-c6ab0738ea79 became leader
	W1009 19:41:30.864917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:30.873423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:41:30.961961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-779570_346f8794-c177-449a-ad4d-c6ab0738ea79!
	W1009 19:41:32.876416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:32.881144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:34.884364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:34.891177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:36.894681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:36.900353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:38.903568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:38.911110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:40.914969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:40.923408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-779570 -n embed-certs-779570
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-779570 -n embed-certs-779570: exit status 2 (365.426651ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-779570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-779570
helpers_test.go:243: (dbg) docker inspect embed-certs-779570:

-- stdout --
	[
	    {
	        "Id": "81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0",
	        "Created": "2025-10-09T19:38:39.409674246Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481485,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:40:23.132560298Z",
	            "FinishedAt": "2025-10-09T19:40:22.322025624Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/hosts",
	        "LogPath": "/var/lib/docker/containers/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0-json.log",
	        "Name": "/embed-certs-779570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-779570:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-779570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0",
	                "LowerDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1b93952cf7afd4d1fdc93e7074b7154c31a4c156d0a37c3fb6baf9c7660fb0d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-779570",
	                "Source": "/var/lib/docker/volumes/embed-certs-779570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-779570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-779570",
	                "name.minikube.sigs.k8s.io": "embed-certs-779570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99fe8f30c6f6cceaa4f628cf0b4e79dfc738de51f96ab456abfb7978d191a5de",
	            "SandboxKey": "/var/run/docker/netns/99fe8f30c6f6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-779570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:07:d0:30:47:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "28e70e683a9e94690b95b84e3e58ac8af1a42ba0d4f6a915911a12474f440d3d",
	                    "EndpointID": "013c95b0f494fcbb7b51312a66e30a1073195033754f6b3aad9f24e66e01735c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-779570",
	                        "81a5b0bcbd3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-779570 -n embed-certs-779570
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-779570 -n embed-certs-779570: exit status 2 (382.314001ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-779570 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-779570 logs -n 25: (1.455642093s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-expiration-259172                                                                                                                                                                                                                     │ cert-expiration-259172       │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:37 UTC │
	│ start   │ -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:37 UTC │ 09 Oct 25 19:38 UTC │
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ stop    │ -p no-preload-678119 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p no-preload-678119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ stop    │ -p embed-certs-779570 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p disable-driver-mounts-557073                                                                                                                                                                                                               │ disable-driver-mounts-557073 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ image   │ embed-certs-779570 image list --format=json                                                                                                                                                                                                   │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-779570 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:40:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:40:22.864592  481360 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:40:22.864734  481360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:22.864745  481360 out.go:374] Setting ErrFile to fd 2...
	I1009 19:40:22.864751  481360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:22.864998  481360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:40:22.865358  481360 out.go:368] Setting JSON to false
	I1009 19:40:22.866286  481360 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8574,"bootTime":1760030249,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:40:22.866356  481360 start.go:141] virtualization:  
	I1009 19:40:22.869409  481360 out.go:179] * [embed-certs-779570] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:40:22.873551  481360 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:40:22.873596  481360 notify.go:220] Checking for updates...
	I1009 19:40:22.879959  481360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:40:22.882876  481360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:40:22.885710  481360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:40:22.888632  481360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:40:22.891664  481360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:40:22.894985  481360 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:22.895535  481360 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:40:22.923978  481360 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:40:22.924130  481360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:40:22.981565  481360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:40:22.972415642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:40:22.981673  481360 docker.go:318] overlay module found
	I1009 19:40:22.984794  481360 out.go:179] * Using the docker driver based on existing profile
	I1009 19:40:22.987854  481360 start.go:305] selected driver: docker
	I1009 19:40:22.987881  481360 start.go:925] validating driver "docker" against &{Name:embed-certs-779570 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:22.987988  481360 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:40:22.988725  481360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:40:23.048380  481360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:40:23.038751526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:40:23.048719  481360 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:40:23.048752  481360 cni.go:84] Creating CNI manager for ""
	I1009 19:40:23.048808  481360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:40:23.048853  481360 start.go:349] cluster config:
	{Name:embed-certs-779570 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:23.052049  481360 out.go:179] * Starting "embed-certs-779570" primary control-plane node in "embed-certs-779570" cluster
	I1009 19:40:23.054890  481360 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:40:23.057836  481360 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:40:23.060738  481360 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:40:23.060791  481360 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:40:23.060808  481360 cache.go:64] Caching tarball of preloaded images
	I1009 19:40:23.060821  481360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:40:23.060888  481360 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:40:23.060898  481360 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:40:23.061017  481360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/config.json ...
	I1009 19:40:23.080722  481360 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:40:23.080746  481360 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:40:23.080762  481360 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:40:23.080787  481360 start.go:360] acquireMachinesLock for embed-certs-779570: {Name:mk171645357bc6d63c40c917bb88ac3ae25dd14e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:40:23.080868  481360 start.go:364] duration metric: took 56.567µs to acquireMachinesLock for "embed-certs-779570"
	I1009 19:40:23.080893  481360 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:40:23.080901  481360 fix.go:54] fixHost starting: 
	I1009 19:40:23.081165  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:23.098326  481360 fix.go:112] recreateIfNeeded on embed-certs-779570: state=Stopped err=<nil>
	W1009 19:40:23.098355  481360 fix.go:138] unexpected machine state, will restart: <nil>
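
Note: the fixHost step above decides whether the existing machine needs a restart by asking Docker for the container's state via `docker container inspect --format={{.State.Status}}`. A minimal Go sketch of that probe, assuming the Docker CLI is on PATH (the helper name and error handling are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns the value of .State.Status ("running", "exited", ...)
	// for the named container by shelling out to the Docker CLI.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("embed-certs-779570")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// A stopped container would be restarted here, mirroring the log above.
		fmt.Println("container state:", state)
	}
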
	I1009 19:40:19.234262  480157 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-661639:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.470962366s)
	I1009 19:40:19.234293  480157 kic.go:203] duration metric: took 4.471107115s to extract preloaded images to volume ...
	W1009 19:40:19.234436  480157 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:40:19.234566  480157 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:40:19.289012  480157 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-661639 --name default-k8s-diff-port-661639 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-661639 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-661639 --network default-k8s-diff-port-661639 --ip 192.168.76.2 --volume default-k8s-diff-port-661639:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:40:19.561039  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Running}}
	I1009 19:40:19.586144  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:19.610995  480157 cli_runner.go:164] Run: docker exec default-k8s-diff-port-661639 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:40:19.661324  480157 oci.go:144] the created container "default-k8s-diff-port-661639" has a running status.
	I1009 19:40:19.661364  480157 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa...
	I1009 19:40:20.585861  480157 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:40:20.604809  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:20.621258  480157 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:40:20.621281  480157 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-661639 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:40:20.660955  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:20.685339  480157 machine.go:93] provisionDockerMachine start ...
	I1009 19:40:20.685437  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:20.705939  480157 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:20.706326  480157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1009 19:40:20.706345  480157 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:40:20.707042  480157 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:40:23.877582  480157 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661639
	
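
Note: the earlier "Error dialing TCP: ssh: handshake failed: EOF" line is expected right after the container starts; sshd inside the guest is not accepting connections yet, and the provisioner simply retries until the forwarded port answers (about three seconds later in this run). A minimal sketch of such a wait loop, with the port taken from the log and the timeout values purely illustrative:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForTCP retries a plain TCP dial until the address accepts a
	// connection or the overall deadline expires.
	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s not reachable after %s: %w", addr, timeout, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		// 33445 is the host port Docker mapped to the guest's port 22 in the log above.
		if err := waitForTCP("127.0.0.1:33445", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port is accepting connections")
	}
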
	I1009 19:40:23.877605  480157 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-661639"
	I1009 19:40:23.877669  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:23.894632  480157 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:23.894945  480157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1009 19:40:23.894963  480157 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-661639 && echo "default-k8s-diff-port-661639" | sudo tee /etc/hostname
	I1009 19:40:24.049301  480157 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661639
	
	I1009 19:40:24.049407  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:24.070603  480157 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:24.070912  480157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1009 19:40:24.070931  480157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-661639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-661639/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-661639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:40:24.218544  480157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:40:24.218575  480157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:40:24.218605  480157 ubuntu.go:190] setting up certificates
	I1009 19:40:24.218622  480157 provision.go:84] configureAuth start
	I1009 19:40:24.218689  480157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:40:24.236821  480157 provision.go:143] copyHostCerts
	I1009 19:40:24.236892  480157 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:40:24.236905  480157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:40:24.236987  480157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:40:24.237104  480157 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:40:24.237116  480157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:40:24.237146  480157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:40:24.237215  480157 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:40:24.237226  480157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:40:24.237252  480157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:40:24.237317  480157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-661639 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-661639 localhost minikube]
	I1009 19:40:24.397287  480157 provision.go:177] copyRemoteCerts
	I1009 19:40:24.397364  480157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:40:24.397407  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:24.414108  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:24.517918  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:40:24.535684  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 19:40:24.553532  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:40:24.571179  480157 provision.go:87] duration metric: took 352.527985ms to configureAuth
	I1009 19:40:24.571208  480157 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:40:24.571389  480157 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:24.571510  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:24.588802  480157 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:24.589122  480157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1009 19:40:24.589144  480157 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:40:24.934053  480157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:40:24.934076  480157 machine.go:96] duration metric: took 4.24871416s to provisionDockerMachine
	I1009 19:40:24.934086  480157 client.go:171] duration metric: took 10.872407548s to LocalClient.Create
	I1009 19:40:24.934100  480157 start.go:167] duration metric: took 10.872481149s to libmachine.API.Create "default-k8s-diff-port-661639"
	I1009 19:40:24.934107  480157 start.go:293] postStartSetup for "default-k8s-diff-port-661639" (driver="docker")
	I1009 19:40:24.934117  480157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:40:24.934209  480157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:40:24.934263  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:24.953503  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:25.064060  480157 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:40:25.067951  480157 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:40:25.068113  480157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:40:25.068150  480157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:40:25.068234  480157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:40:25.068346  480157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:40:25.068469  480157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:40:25.077438  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:40:25.099272  480157 start.go:296] duration metric: took 165.148843ms for postStartSetup
	I1009 19:40:25.099739  480157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:40:25.126934  480157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/config.json ...
	I1009 19:40:25.127249  480157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:40:25.127302  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:25.145067  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:25.248164  480157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:40:25.253169  480157 start.go:128] duration metric: took 11.19519468s to createHost
	I1009 19:40:25.253196  480157 start.go:83] releasing machines lock for "default-k8s-diff-port-661639", held for 11.195327604s
	I1009 19:40:25.253271  480157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:40:25.270448  480157 ssh_runner.go:195] Run: cat /version.json
	I1009 19:40:25.270511  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:25.270792  480157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:40:25.270864  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:25.294426  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:25.299718  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:25.393929  480157 ssh_runner.go:195] Run: systemctl --version
	I1009 19:40:25.484710  480157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:40:25.521854  480157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:40:25.526176  480157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:40:25.526311  480157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:40:25.555732  480157 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
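
Note: before kindnet is set up, any pre-existing bridge/podman CNI configs are renamed with a .mk_disabled suffix so the runtime ignores them, which is what the find/mv command above does. A small local sketch of the same rename pass (directory and name patterns taken from the log; this runs on the node, not the Jenkins host, and is not minikube's cni.go):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIs renames bridge/podman CNI config files so the
	// container runtime ignores them, mirroring the find/mv in the log.
	func disableBridgeCNIs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		files, err := disableBridgeCNIs("/etc/cni/net.d")
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Println("disabled:", files)
	}
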
	I1009 19:40:25.555767  480157 start.go:495] detecting cgroup driver to use...
	I1009 19:40:25.555804  480157 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:40:25.555868  480157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:40:25.573792  480157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:40:25.587117  480157 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:40:25.587184  480157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:40:25.605516  480157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:40:25.625768  480157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:40:25.746179  480157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:40:25.877610  480157 docker.go:234] disabling docker service ...
	I1009 19:40:25.877680  480157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:40:25.898629  480157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:40:25.911670  480157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:40:26.023831  480157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:40:26.135074  480157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:40:26.148302  480157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:40:26.165178  480157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:40:26.165246  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.174157  480157 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:40:26.174234  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.183237  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.192100  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.200514  480157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:40:26.208426  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.217279  480157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.230965  480157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:26.239791  480157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:40:26.247274  480157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:40:26.254824  480157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:26.359364  480157 ssh_runner.go:195] Run: sudo systemctl restart crio
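
Note: the sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the cgroupfs cgroup manager, sets conmon_cgroup = "pod", and whitelists net.ipv4.ip_unprivileged_port_start=0 before crio is restarted. A rough Go sketch of the same line-oriented rewrite for the first two settings (the regexes paraphrase the sed expressions in the log; this is not minikube's crio.go):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteCrioConf applies the same substitutions the log performs with sed:
	// pin the pause image and switch the cgroup manager.
	func rewriteCrioConf(path, pauseImage, cgroupMgr string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupMgr+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10.1", "cgroupfs")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		// A real provisioner would now run `systemctl daemon-reload` and
		// `systemctl restart crio`, as the log does next.
		fmt.Println("02-crio.conf updated")
	}
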
	I1009 19:40:26.518221  480157 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:40:26.518317  480157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:40:26.523761  480157 start.go:563] Will wait 60s for crictl version
	I1009 19:40:26.523859  480157 ssh_runner.go:195] Run: which crictl
	I1009 19:40:26.529127  480157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:40:26.553888  480157 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:40:26.554056  480157 ssh_runner.go:195] Run: crio --version
	I1009 19:40:26.586608  480157 ssh_runner.go:195] Run: crio --version
	I1009 19:40:26.636967  480157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:40:23.101577  481360 out.go:252] * Restarting existing docker container for "embed-certs-779570" ...
	I1009 19:40:23.101669  481360 cli_runner.go:164] Run: docker start embed-certs-779570
	I1009 19:40:23.355375  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:23.377589  481360 kic.go:430] container "embed-certs-779570" state is running.
	I1009 19:40:23.377983  481360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-779570
	I1009 19:40:23.403303  481360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/config.json ...
	I1009 19:40:23.403553  481360 machine.go:93] provisionDockerMachine start ...
	I1009 19:40:23.403623  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:23.425489  481360 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:23.426040  481360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1009 19:40:23.426057  481360 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:40:23.426833  481360 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:40:26.573913  481360 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-779570
	
	I1009 19:40:26.574002  481360 ubuntu.go:182] provisioning hostname "embed-certs-779570"
	I1009 19:40:26.574095  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:26.599982  481360 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:26.600312  481360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1009 19:40:26.600324  481360 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-779570 && echo "embed-certs-779570" | sudo tee /etc/hostname
	I1009 19:40:26.768779  481360 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-779570
	
	I1009 19:40:26.768896  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:26.790727  481360 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:26.791035  481360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1009 19:40:26.791058  481360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-779570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-779570/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-779570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:40:26.946459  481360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:40:26.946538  481360 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:40:26.946582  481360 ubuntu.go:190] setting up certificates
	I1009 19:40:26.946631  481360 provision.go:84] configureAuth start
	I1009 19:40:26.946717  481360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-779570
	I1009 19:40:26.971990  481360 provision.go:143] copyHostCerts
	I1009 19:40:26.972072  481360 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:40:26.972088  481360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:40:26.972172  481360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:40:26.972283  481360 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:40:26.972289  481360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:40:26.972326  481360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:40:26.972385  481360 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:40:26.972396  481360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:40:26.972421  481360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:40:26.972469  481360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.embed-certs-779570 san=[127.0.0.1 192.168.85.2 embed-certs-779570 localhost minikube]
	I1009 19:40:27.456687  481360 provision.go:177] copyRemoteCerts
	I1009 19:40:27.456820  481360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:40:27.456903  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:27.485385  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:27.588218  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:40:27.611548  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:40:27.633063  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:40:27.654017  481360 provision.go:87] duration metric: took 707.357328ms to configureAuth
	I1009 19:40:27.654068  481360 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:40:27.654289  481360 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:27.654424  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:27.674765  481360 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:27.675087  481360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1009 19:40:27.675102  481360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:40:26.639747  480157 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-661639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:40:26.667804  480157 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:40:26.672533  480157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:26.683723  480157 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-661639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:40:26.683842  480157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:40:26.683908  480157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:26.725690  480157 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:40:26.725717  480157 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:40:26.725776  480157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:26.753021  480157 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:40:26.753046  480157 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:40:26.753054  480157 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1009 19:40:26.753151  480157 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-661639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:40:26.753229  480157 ssh_runner.go:195] Run: crio config
	I1009 19:40:26.830632  480157 cni.go:84] Creating CNI manager for ""
	I1009 19:40:26.830657  480157 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:40:26.830675  480157 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:40:26.830698  480157 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-661639 NodeName:default-k8s-diff-port-661639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:40:26.830820  480157 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-661639"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
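
Note: the generated kubeadm config above is a single file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---"; it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small standard-library sketch that splits such a file and reports each document's kind, handy when eyeballing what was scp'd to the node (file path taken from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// listKinds splits a multi-document YAML file on "---" separators and
	// returns the value of each document's top-level "kind:" field.
	func listKinds(path string) ([]string, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return nil, err
		}
		var kinds []string
		for _, doc := range strings.Split(string(data), "\n---") {
			for _, line := range strings.Split(doc, "\n") {
				t := strings.TrimSpace(line)
				if strings.HasPrefix(t, "kind:") {
					kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(t, "kind:")))
					break
				}
			}
		}
		return kinds, nil
	}

	func main() {
		kinds, err := listKinds("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		// Expected for the config above: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Println(kinds)
	}
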
	I1009 19:40:26.830896  480157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:40:26.844397  480157 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:40:26.844472  480157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:40:26.852961  480157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1009 19:40:26.869624  480157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:40:26.888296  480157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1009 19:40:26.903562  480157 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:40:26.907527  480157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:26.917931  480157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:27.079343  480157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:40:27.097441  480157 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639 for IP: 192.168.76.2
	I1009 19:40:27.097463  480157 certs.go:195] generating shared ca certs ...
	I1009 19:40:27.097483  480157 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:27.097630  480157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:40:27.097722  480157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:40:27.097735  480157 certs.go:257] generating profile certs ...
	I1009 19:40:27.097793  480157 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.key
	I1009 19:40:27.097818  480157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt with IP's: []
	I1009 19:40:28.401196  480157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt ...
	I1009 19:40:28.401273  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: {Name:mk651e23b2facd267582e16e7a4694b152f5962b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:28.401515  480157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.key ...
	I1009 19:40:28.401559  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.key: {Name:mkc71d476cb6ed1cdf5ce0926d86ca657e7349ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:28.401711  480157 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb
	I1009 19:40:28.401757  480157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt.6f8704fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
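
Note: the apiserver certificate above is signed by the shared minikubeCA and carries the service, loopback, and node IPs as SANs. A condensed sketch of issuing a cert with IP SANs using only crypto/x509, with a throwaway CA standing in for minikubeCA (key size, validity, and subjects are illustrative; minikube's certs.go does more bookkeeping around file paths and reuse):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// issueServingCert signs a certificate for the given IP SANs with the
	// provided CA, mirroring the apiserver cert generation in the log.
	func issueServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses:  ips,
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		// IP SANs taken from the log line above.
		ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2")}
		der, _, err := issueServingCert(ca, caKey, ips)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("issued apiserver cert, %d bytes DER\n", len(der))
	}
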
	I1009 19:40:28.084248  481360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:40:28.084272  481360 machine.go:96] duration metric: took 4.680700732s to provisionDockerMachine
	I1009 19:40:28.084285  481360 start.go:293] postStartSetup for "embed-certs-779570" (driver="docker")
	I1009 19:40:28.084297  481360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:40:28.084372  481360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:40:28.084414  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:28.124931  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:28.227076  481360 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:40:28.231138  481360 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:40:28.231209  481360 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:40:28.231236  481360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:40:28.231322  481360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:40:28.231448  481360 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:40:28.231593  481360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:40:28.239755  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:40:28.260099  481360 start.go:296] duration metric: took 175.798192ms for postStartSetup
	I1009 19:40:28.260219  481360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:40:28.260288  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:28.279899  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:28.379330  481360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:40:28.384752  481360 fix.go:56] duration metric: took 5.303843775s for fixHost
	I1009 19:40:28.384778  481360 start.go:83] releasing machines lock for "embed-certs-779570", held for 5.30389596s
	I1009 19:40:28.384854  481360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-779570
	I1009 19:40:28.403719  481360 ssh_runner.go:195] Run: cat /version.json
	I1009 19:40:28.403768  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:28.403799  481360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:40:28.403852  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:28.434976  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:28.466494  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:28.549986  481360 ssh_runner.go:195] Run: systemctl --version
	I1009 19:40:28.649699  481360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:40:28.728818  481360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:40:28.733537  481360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:40:28.733606  481360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:40:28.743637  481360 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
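The find invocation two lines above loses its shell quoting when logged; a sketch of the equivalent quoted form (relies on GNU find expanding {} inside the -exec argument, as on the Debian node used here):

  # rename any bridge/podman CNI configs so crio's own networking is not shadowed
  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;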
	I1009 19:40:28.743657  481360 start.go:495] detecting cgroup driver to use...
	I1009 19:40:28.743696  481360 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:40:28.743746  481360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:40:28.760001  481360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:40:28.774334  481360 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:40:28.774389  481360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:40:28.791061  481360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:40:28.805814  481360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:40:28.956021  481360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:40:29.109476  481360 docker.go:234] disabling docker service ...
	I1009 19:40:29.109541  481360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:40:29.127886  481360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:40:29.142454  481360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:40:29.278990  481360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:40:29.493888  481360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:40:29.511633  481360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:40:29.542091  481360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:40:29.542297  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.553916  481360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:40:29.553997  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.563633  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.573252  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.582705  481360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:40:29.591430  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.600618  481360 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:40:29.609488  481360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
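Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a reconstruction from the commands in this log, not a dump of the full file:

  # pause image and cgroup handling forced to match the kubelet configuration below
  pause_image = "registry.k8s.io/pause:3.10.1"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  # allow pods to bind low ports without extra capabilities
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]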
	I1009 19:40:29.618910  481360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:40:29.627618  481360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:40:29.635849  481360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:29.771950  481360 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:40:29.954623  481360 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:40:29.954745  481360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:40:29.959182  481360 start.go:563] Will wait 60s for crictl version
	I1009 19:40:29.959284  481360 ssh_runner.go:195] Run: which crictl
	I1009 19:40:29.963092  481360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:40:29.993690  481360 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:40:29.994118  481360 ssh_runner.go:195] Run: crio --version
	I1009 19:40:30.081891  481360 ssh_runner.go:195] Run: crio --version
	I1009 19:40:30.135647  481360 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:40:29.432732  480157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt.6f8704fb ...
	I1009 19:40:29.432823  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt.6f8704fb: {Name:mkd3bff069ed34901ca13fb8944bdb0bb4f880e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:29.433062  480157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb ...
	I1009 19:40:29.433101  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb: {Name:mk29fc5943bf7c39c5bcf5094b244b536c1b64b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:29.433233  480157 certs.go:382] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt.6f8704fb -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt
	I1009 19:40:29.433358  480157 certs.go:386] copying /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb -> /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key
	I1009 19:40:29.433464  480157 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key
	I1009 19:40:29.433515  480157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt with IP's: []
	I1009 19:40:29.800143  480157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt ...
	I1009 19:40:29.800219  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt: {Name:mk05067dbfdfdcaa6698d49684961bc3a981883f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:29.800484  480157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key ...
	I1009 19:40:29.800531  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key: {Name:mkc12939803757aaa092914e87f0edc14672b1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:29.801828  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:40:29.801919  480157 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:40:29.801947  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:40:29.801999  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:40:29.802044  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:40:29.802095  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:40:29.802199  480157 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:40:29.802848  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:40:29.820746  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:40:29.841912  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:40:29.859466  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:40:29.876694  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:40:29.894949  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:40:29.919537  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:40:29.941484  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:40:29.965282  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:40:29.984316  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:40:30.008908  480157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:40:30.036266  480157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:40:30.053941  480157 ssh_runner.go:195] Run: openssl version
	I1009 19:40:30.062082  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:40:30.073850  480157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:40:30.079506  480157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:40:30.079682  480157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:40:30.139348  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:40:30.160037  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:40:30.178122  480157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.188345  480157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.188410  480157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.250032  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:40:30.259287  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:40:30.268628  480157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:30.273454  480157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:30.273526  480157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:30.315986  480157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
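The pattern in the preceding lines (hash the PEM, then link it under the hash name, e.g. b5213941.0 for minikubeCA.pem) is how an OpenSSL-style trust directory is populated; a minimal sketch of the same two steps for one certificate, paths taken from the log:

  # compute the subject hash OpenSSL uses to look up CAs in /etc/ssl/certs
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  # expose the CA under <hash>.0 so TLS clients on the node trust it
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"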
	I1009 19:40:30.325085  480157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:40:30.329380  480157 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:40:30.329435  480157 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-661639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:30.329509  480157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:40:30.329564  480157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:40:30.359866  480157 cri.go:89] found id: ""
	I1009 19:40:30.359942  480157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:40:30.370320  480157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:40:30.378698  480157 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:40:30.378768  480157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:40:30.389633  480157 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:40:30.389652  480157 kubeadm.go:157] found existing configuration files:
	
	I1009 19:40:30.389707  480157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 19:40:30.398971  480157 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:40:30.399044  480157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:40:30.407216  480157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 19:40:30.415391  480157 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:40:30.415450  480157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:40:30.423319  480157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 19:40:30.432541  480157 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:40:30.432605  480157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:40:30.440201  480157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 19:40:30.449318  480157 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:40:30.449391  480157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:40:30.460285  480157 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:40:30.528529  480157 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:40:30.528864  480157 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:40:30.553257  480157 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:40:30.553336  480157 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:40:30.553380  480157 kubeadm.go:318] OS: Linux
	I1009 19:40:30.553432  480157 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:40:30.553487  480157 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:40:30.553539  480157 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:40:30.553594  480157 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:40:30.553651  480157 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:40:30.553721  480157 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:40:30.553775  480157 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:40:30.553839  480157 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:40:30.553892  480157 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:40:30.654657  480157 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:40:30.654802  480157 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:40:30.654910  480157 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:40:30.678456  480157 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:40:30.138735  481360 cli_runner.go:164] Run: docker network inspect embed-certs-779570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:40:30.172456  481360 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:40:30.176552  481360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:30.192954  481360 kubeadm.go:883] updating cluster {Name:embed-certs-779570 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:40:30.193089  481360 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:40:30.193146  481360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:30.243029  481360 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:40:30.243103  481360 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:40:30.243178  481360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:30.278274  481360 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:40:30.278300  481360 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:40:30.278308  481360 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 19:40:30.278412  481360 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-779570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:40:30.278488  481360 ssh_runner.go:195] Run: crio config
	I1009 19:40:30.344900  481360 cni.go:84] Creating CNI manager for ""
	I1009 19:40:30.344921  481360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:40:30.344942  481360 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:40:30.344965  481360 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-779570 NodeName:embed-certs-779570 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:40:30.345093  481360 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-779570"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:40:30.345161  481360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:40:30.354547  481360 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:40:30.354610  481360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:40:30.364947  481360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1009 19:40:30.382536  481360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:40:30.399313  481360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
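At this point the rendered kubeadm config has been staged as /var/tmp/minikube/kubeadm.yaml.new on the node. This test run proceeds straight to the restart path, but if you wanted to sanity-check such a file by hand, a hedged sketch (kubeadm init's --dry-run flag goes through the init phases against a temporary directory instead of standing up a control plane):

  # parse the staged config and dry-run the init phases without changing cluster state
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run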
	I1009 19:40:30.414066  481360 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:40:30.418760  481360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:30.429988  481360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:30.569655  481360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:40:30.590575  481360 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570 for IP: 192.168.85.2
	I1009 19:40:30.590633  481360 certs.go:195] generating shared ca certs ...
	I1009 19:40:30.590674  481360 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:30.590868  481360 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:40:30.590956  481360 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:40:30.590982  481360 certs.go:257] generating profile certs ...
	I1009 19:40:30.591116  481360 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/client.key
	I1009 19:40:30.591223  481360 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/apiserver.key.b138eccb
	I1009 19:40:30.591299  481360 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/proxy-client.key
	I1009 19:40:30.591457  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:40:30.591523  481360 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:40:30.591548  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:40:30.591606  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:40:30.591671  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:40:30.591717  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:40:30.591795  481360 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:40:30.592663  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:40:30.612801  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:40:30.633012  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:40:30.653463  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:40:30.673732  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 19:40:30.694889  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:40:30.713448  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:40:30.745017  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/embed-certs-779570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:40:30.777633  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:40:30.811908  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:40:30.859163  481360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:40:30.906909  481360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:40:30.927608  481360 ssh_runner.go:195] Run: openssl version
	I1009 19:40:30.946594  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:40:30.955717  481360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.965539  481360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:40:30.965648  481360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:40:31.035625  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:40:31.058321  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:40:31.067735  481360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:31.073187  481360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:31.073308  481360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:31.116512  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:40:31.135301  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:40:31.145014  481360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:40:31.151601  481360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:40:31.151740  481360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:40:31.194708  481360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:40:31.203561  481360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:40:31.207976  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:40:31.251640  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:40:31.293646  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:40:31.336537  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:40:31.380212  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:40:31.439278  481360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:40:31.541805  481360 kubeadm.go:400] StartCluster: {Name:embed-certs-779570 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-779570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:31.541942  481360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:40:31.542037  481360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:40:31.690601  481360 cri.go:89] found id: "bd786fb30861863193a83b2938180673280745718306b229493eaee4d7f1c6df"
	I1009 19:40:31.690683  481360 cri.go:89] found id: ""
	I1009 19:40:31.690772  481360 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:40:31.761236  481360 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:40:31Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:40:31.761427  481360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:40:31.804955  481360 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:40:31.805033  481360 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:40:31.805147  481360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:40:31.834466  481360 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:40:31.835054  481360 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-779570" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:40:31.835258  481360 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-779570" cluster setting kubeconfig missing "embed-certs-779570" context setting]
	I1009 19:40:31.835652  481360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:31.837550  481360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:40:31.859518  481360 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 19:40:31.859617  481360 kubeadm.go:601] duration metric: took 54.543706ms to restartPrimaryControlPlane
	I1009 19:40:31.859643  481360 kubeadm.go:402] duration metric: took 317.859868ms to StartCluster
	I1009 19:40:31.859691  481360 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:31.859786  481360 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:40:31.861009  481360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:31.861371  481360 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:40:31.861930  481360 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:40:31.862017  481360 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-779570"
	I1009 19:40:31.862032  481360 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-779570"
	W1009 19:40:31.862039  481360 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:40:31.862068  481360 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:40:31.862645  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:31.863025  481360 config.go:182] Loaded profile config "embed-certs-779570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:31.863117  481360 addons.go:69] Setting dashboard=true in profile "embed-certs-779570"
	I1009 19:40:31.863159  481360 addons.go:238] Setting addon dashboard=true in "embed-certs-779570"
	W1009 19:40:31.863193  481360 addons.go:247] addon dashboard should already be in state true
	I1009 19:40:31.863231  481360 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:40:31.863739  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:31.864321  481360 addons.go:69] Setting default-storageclass=true in profile "embed-certs-779570"
	I1009 19:40:31.864351  481360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-779570"
	I1009 19:40:31.864654  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:31.869548  481360 out.go:179] * Verifying Kubernetes components...
	I1009 19:40:31.872836  481360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:31.916255  481360 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:40:31.919811  481360 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:40:31.924192  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:40:31.924225  481360 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:40:31.924295  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:31.936221  481360 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:40:31.940078  481360 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:40:31.940108  481360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:40:31.940177  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:31.946635  481360 addons.go:238] Setting addon default-storageclass=true in "embed-certs-779570"
	W1009 19:40:31.946659  481360 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:40:31.946683  481360 host.go:66] Checking if "embed-certs-779570" exists ...
	I1009 19:40:31.947202  481360 cli_runner.go:164] Run: docker container inspect embed-certs-779570 --format={{.State.Status}}
	I1009 19:40:31.982384  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:32.008294  481360 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:40:32.008316  481360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:40:32.008392  481360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-779570
	I1009 19:40:32.011321  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:32.038628  481360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/embed-certs-779570/id_rsa Username:docker}
	I1009 19:40:32.381528  481360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:40:32.471921  481360 node_ready.go:35] waiting up to 6m0s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:40:32.504798  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:40:32.504821  481360 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:40:32.534327  481360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:40:32.535335  481360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:40:32.591865  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:40:32.591932  481360 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:40:32.742649  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:40:32.742676  481360 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:40:32.815250  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:40:32.815274  481360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:40:30.684080  480157 out.go:252]   - Generating certificates and keys ...
	I1009 19:40:30.684238  480157 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:40:30.684338  480157 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:40:32.374393  480157 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:40:33.018633  480157 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:40:33.168967  480157 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:40:32.987283  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:40:32.987311  481360 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:40:33.083458  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:40:33.083520  481360 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:40:33.117223  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:40:33.117287  481360 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:40:33.148661  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:40:33.148724  481360 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:40:33.168700  481360 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:40:33.168763  481360 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:40:33.195929  481360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:40:34.256501  480157 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:40:35.147801  480157 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:40:35.148455  480157 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-661639 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:40:35.460544  480157 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:40:35.462516  480157 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-661639 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 19:40:35.639404  480157 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:40:35.834454  480157 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:40:36.230245  480157 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:40:36.230761  480157 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:40:38.144805  480157 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:40:38.669265  480157 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:40:38.788433  480157 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:40:39.080769  480157 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:40:39.929727  480157 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:40:39.931857  480157 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:40:39.934847  480157 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:40:41.123476  481360 node_ready.go:49] node "embed-certs-779570" is "Ready"
	I1009 19:40:41.123560  481360 node_ready.go:38] duration metric: took 8.651604376s for node "embed-certs-779570" to be "Ready" ...
	I1009 19:40:41.123589  481360 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:40:41.123682  481360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:40:41.376431  481360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.842071872s)
	I1009 19:40:39.938308  480157 out.go:252]   - Booting up control plane ...
	I1009 19:40:39.938407  480157 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:40:39.938489  480157 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:40:39.939562  480157 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:40:39.961006  480157 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:40:39.961118  480157 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:40:39.969759  480157 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:40:39.969862  480157 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:40:39.969919  480157 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:40:40.197592  480157 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:40:40.197717  480157 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:40:41.202614  480157 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00169588s
	I1009 19:40:41.202727  480157 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:40:41.202812  480157 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1009 19:40:41.202906  480157 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:40:41.202988  480157 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:40:44.160796  481360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.625432102s)
	I1009 19:40:44.160918  481360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.964913062s)
	I1009 19:40:44.161054  481360 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.037334351s)
	I1009 19:40:44.161074  481360 api_server.go:72] duration metric: took 12.299621864s to wait for apiserver process to appear ...
	I1009 19:40:44.161084  481360 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:40:44.161102  481360 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:40:44.163947  481360 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-779570 addons enable metrics-server
	
	I1009 19:40:44.166785  481360 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1009 19:40:44.169597  481360 addons.go:514] duration metric: took 12.307639181s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1009 19:40:44.175129  481360 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:40:44.176200  481360 api_server.go:141] control plane version: v1.34.1
	I1009 19:40:44.176226  481360 api_server.go:131] duration metric: took 15.134505ms to wait for apiserver health ...
	I1009 19:40:44.176236  481360 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:40:44.191806  481360 system_pods.go:59] 8 kube-system pods found
	I1009 19:40:44.191845  481360 system_pods.go:61] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:40:44.191856  481360 system_pods.go:61] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:40:44.191862  481360 system_pods.go:61] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:40:44.191869  481360 system_pods.go:61] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:40:44.191878  481360 system_pods.go:61] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:40:44.191883  481360 system_pods.go:61] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:40:44.191891  481360 system_pods.go:61] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:40:44.191902  481360 system_pods.go:61] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Running
	I1009 19:40:44.191908  481360 system_pods.go:74] duration metric: took 15.666081ms to wait for pod list to return data ...
	I1009 19:40:44.191920  481360 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:40:44.196197  481360 default_sa.go:45] found service account: "default"
	I1009 19:40:44.196222  481360 default_sa.go:55] duration metric: took 4.295886ms for default service account to be created ...
	I1009 19:40:44.196232  481360 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:40:44.204360  481360 system_pods.go:86] 8 kube-system pods found
	I1009 19:40:44.204396  481360 system_pods.go:89] "coredns-66bc5c9577-4c9xb" [e2529ef6-950b-4a93-8a58-05ced011aec9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:40:44.204405  481360 system_pods.go:89] "etcd-embed-certs-779570" [398da62a-c291-4ff0-8c6c-2d67cf58216f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:40:44.204412  481360 system_pods.go:89] "kindnet-lgfbl" [45264249-abcf-4cfc-b842-d97424fc53be] Running
	I1009 19:40:44.204418  481360 system_pods.go:89] "kube-apiserver-embed-certs-779570" [043aa13e-cd36-440b-a0cf-623de02dffdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:40:44.204425  481360 system_pods.go:89] "kube-controller-manager-embed-certs-779570" [3541847e-2154-4aff-9087-bb65f54e52f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:40:44.204430  481360 system_pods.go:89] "kube-proxy-sp4bk" [48274b49-cb31-48c3-96c8-a187d4e6000b] Running
	I1009 19:40:44.204437  481360 system_pods.go:89] "kube-scheduler-embed-certs-779570" [496e81ae-ad11-4c48-8dc7-e9fdf55437eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:40:44.204441  481360 system_pods.go:89] "storage-provisioner" [2cdc2193-2900-4e63-a482-40739fe08704] Running
	I1009 19:40:44.204449  481360 system_pods.go:126] duration metric: took 8.211395ms to wait for k8s-apps to be running ...
	I1009 19:40:44.204464  481360 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:40:44.204532  481360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:40:44.228761  481360 system_svc.go:56] duration metric: took 24.288582ms WaitForService to wait for kubelet
	I1009 19:40:44.228789  481360 kubeadm.go:586] duration metric: took 12.367334696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:40:44.228808  481360 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:40:44.236597  481360 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:40:44.236627  481360 node_conditions.go:123] node cpu capacity is 2
	I1009 19:40:44.236641  481360 node_conditions.go:105] duration metric: took 7.82689ms to run NodePressure ...
	I1009 19:40:44.236654  481360 start.go:241] waiting for startup goroutines ...
	I1009 19:40:44.236661  481360 start.go:246] waiting for cluster config update ...
	I1009 19:40:44.236674  481360 start.go:255] writing updated cluster config ...
	I1009 19:40:44.236953  481360 ssh_runner.go:195] Run: rm -f paused
	I1009 19:40:44.247976  481360 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:40:44.251689  481360 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:40:46.300706  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:40:46.028190  480157 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.824171766s
	I1009 19:40:49.493047  480157 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.28961737s
	I1009 19:40:51.706655  480157 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.503011829s
	I1009 19:40:51.731847  480157 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:40:51.747214  480157 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:40:51.766422  480157 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:40:51.766932  480157 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-661639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:40:51.783988  480157 kubeadm.go:318] [bootstrap-token] Using token: is484r.azrjmlmvdylfauu1
	W1009 19:40:48.757473  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:40:50.757819  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:40:52.759128  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:40:51.787113  480157 out.go:252]   - Configuring RBAC rules ...
	I1009 19:40:51.787241  480157 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:40:51.797741  480157 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:40:51.822020  480157 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:40:51.834255  480157 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:40:51.839719  480157 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:40:51.856528  480157 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:40:52.121496  480157 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:40:52.616162  480157 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:40:53.118371  480157 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:40:53.120064  480157 kubeadm.go:318] 
	I1009 19:40:53.120147  480157 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:40:53.120158  480157 kubeadm.go:318] 
	I1009 19:40:53.120247  480157 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:40:53.120257  480157 kubeadm.go:318] 
	I1009 19:40:53.120285  480157 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:40:53.120352  480157 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:40:53.120410  480157 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:40:53.120419  480157 kubeadm.go:318] 
	I1009 19:40:53.120476  480157 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:40:53.120485  480157 kubeadm.go:318] 
	I1009 19:40:53.120536  480157 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:40:53.120544  480157 kubeadm.go:318] 
	I1009 19:40:53.120600  480157 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:40:53.120683  480157 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:40:53.120759  480157 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:40:53.120768  480157 kubeadm.go:318] 
	I1009 19:40:53.120864  480157 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:40:53.120950  480157 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:40:53.120977  480157 kubeadm.go:318] 
	I1009 19:40:53.121071  480157 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token is484r.azrjmlmvdylfauu1 \
	I1009 19:40:53.121207  480157 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:40:53.121237  480157 kubeadm.go:318] 	--control-plane 
	I1009 19:40:53.121247  480157 kubeadm.go:318] 
	I1009 19:40:53.121337  480157 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:40:53.121345  480157 kubeadm.go:318] 
	I1009 19:40:53.121432  480157 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token is484r.azrjmlmvdylfauu1 \
	I1009 19:40:53.121543  480157 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 19:40:53.125569  480157 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:40:53.125845  480157 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:40:53.125992  480157 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:40:53.126027  480157 cni.go:84] Creating CNI manager for ""
	I1009 19:40:53.126036  480157 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:40:53.129471  480157 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:40:53.132408  480157 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:40:53.137336  480157 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:40:53.137356  480157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:40:53.172401  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:40:53.661103  480157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:40:53.661254  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:53.661330  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-661639 minikube.k8s.io/updated_at=2025_10_09T19_40_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=default-k8s-diff-port-661639 minikube.k8s.io/primary=true
	W1009 19:40:55.260263  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:40:57.760178  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:40:54.041422  480157 ops.go:34] apiserver oom_adj: -16
	I1009 19:40:54.041536  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:54.541626  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:55.042091  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:55.542219  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:56.041697  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:56.541744  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:57.042490  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:57.542097  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:58.042397  480157 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:40:58.315779  480157 kubeadm.go:1113] duration metric: took 4.654576231s to wait for elevateKubeSystemPrivileges
	I1009 19:40:58.315809  480157 kubeadm.go:402] duration metric: took 27.986377566s to StartCluster
	I1009 19:40:58.315828  480157 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:58.315893  480157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:40:58.317530  480157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:58.317778  480157 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:40:58.318010  480157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:40:58.318178  480157 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:40:58.318268  480157 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-661639"
	I1009 19:40:58.318293  480157 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-661639"
	I1009 19:40:58.318319  480157 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:40:58.318432  480157 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:40:58.318497  480157 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-661639"
	I1009 19:40:58.318538  480157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-661639"
	I1009 19:40:58.318855  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:58.318857  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:58.323183  480157 out.go:179] * Verifying Kubernetes components...
	I1009 19:40:58.330097  480157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:58.356060  480157 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:40:58.357702  480157 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-661639"
	I1009 19:40:58.357740  480157 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:40:58.359210  480157 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:40:58.359229  480157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:40:58.359303  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:58.359524  480157 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:40:58.402797  480157 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:40:58.402818  480157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:40:58.402887  480157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:40:58.404759  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:58.431744  480157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:40:58.864248  480157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:40:59.143505  480157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:40:59.196278  480157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:40:59.196401  480157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:41:00.400024  480157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256480726s)
	I1009 19:41:00.400303  480157 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.203879405s)
	I1009 19:41:00.400622  480157 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.204313191s)
	I1009 19:41:00.400651  480157 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1009 19:41:00.403605  480157 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1009 19:41:00.270275  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:02.757932  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:41:00.404404  480157 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-661639" to be "Ready" ...
	I1009 19:41:00.406776  480157 addons.go:514] duration metric: took 2.088575741s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 19:41:00.905194  480157 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-661639" context rescaled to 1 replicas
	W1009 19:41:02.407242  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:05.256792  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:07.257345  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:04.407551  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:06.407805  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:09.757830  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:12.256944  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:08.908367  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:11.407169  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:13.407428  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:14.256997  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:16.257126  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:15.407699  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:17.907270  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:18.757445  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:20.757661  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	W1009 19:41:20.407810  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:22.407855  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:23.257314  481360 pod_ready.go:104] pod "coredns-66bc5c9577-4c9xb" is not "Ready", error: <nil>
	I1009 19:41:24.757768  481360 pod_ready.go:94] pod "coredns-66bc5c9577-4c9xb" is "Ready"
	I1009 19:41:24.757799  481360 pod_ready.go:86] duration metric: took 40.506085316s for pod "coredns-66bc5c9577-4c9xb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.760533  481360 pod_ready.go:83] waiting for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.765081  481360 pod_ready.go:94] pod "etcd-embed-certs-779570" is "Ready"
	I1009 19:41:24.765103  481360 pod_ready.go:86] duration metric: took 4.545228ms for pod "etcd-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.767501  481360 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.772481  481360 pod_ready.go:94] pod "kube-apiserver-embed-certs-779570" is "Ready"
	I1009 19:41:24.772560  481360 pod_ready.go:86] duration metric: took 5.031042ms for pod "kube-apiserver-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.775062  481360 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:24.956480  481360 pod_ready.go:94] pod "kube-controller-manager-embed-certs-779570" is "Ready"
	I1009 19:41:24.956507  481360 pod_ready.go:86] duration metric: took 181.416344ms for pod "kube-controller-manager-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:25.156320  481360 pod_ready.go:83] waiting for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:25.555664  481360 pod_ready.go:94] pod "kube-proxy-sp4bk" is "Ready"
	I1009 19:41:25.555690  481360 pod_ready.go:86] duration metric: took 399.339973ms for pod "kube-proxy-sp4bk" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:25.755977  481360 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:26.156611  481360 pod_ready.go:94] pod "kube-scheduler-embed-certs-779570" is "Ready"
	I1009 19:41:26.156639  481360 pod_ready.go:86] duration metric: took 400.633273ms for pod "kube-scheduler-embed-certs-779570" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:41:26.156652  481360 pod_ready.go:40] duration metric: took 41.908634821s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:41:26.212305  481360 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:41:26.215550  481360 out.go:179] * Done! kubectl is now configured to use "embed-certs-779570" cluster and "default" namespace by default
	W1009 19:41:24.407937  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:26.907333  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:28.907997  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:31.407972  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:33.908001  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:36.407810  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	W1009 19:41:38.408430  480157 node_ready.go:57] node "default-k8s-diff-port-661639" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.019442235Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e140e878-1a2f-443e-b567-c165976647e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.02227801Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9aee8573-fa24-4ca5-aec0-97a586632629 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.024033469Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj/dashboard-metrics-scraper" id=966b3784-910f-4e6c-a79a-4ab2a947a95d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.024337891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.038352155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.039266159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.057280815Z" level=info msg="Created container 446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj/dashboard-metrics-scraper" id=966b3784-910f-4e6c-a79a-4ab2a947a95d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.058027064Z" level=info msg="Starting container: 446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f" id=ea850075-c2a0-46f8-87ac-76a3ed07350f name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.061913223Z" level=info msg="Started container" PID=1640 containerID=446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj/dashboard-metrics-scraper id=ea850075-c2a0-46f8-87ac-76a3ed07350f name=/runtime.v1.RuntimeService/StartContainer sandboxID=b73f8f15a6a96076a7b14b7e0585da90662e0bbb3972e1710dd325d6b7c9e211
	Oct 09 19:41:21 embed-certs-779570 conmon[1638]: conmon 446b9977cb579388c116 <ninfo>: container 1640 exited with status 1
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.41742994Z" level=info msg="Removing container: f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241" id=d5932865-955c-44dc-8645-6487946ff858 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.428135076Z" level=info msg="Error loading conmon cgroup of container f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241: cgroup deleted" id=d5932865-955c-44dc-8645-6487946ff858 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:41:21 embed-certs-779570 crio[653]: time="2025-10-09T19:41:21.431252488Z" level=info msg="Removed container f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj/dashboard-metrics-scraper" id=d5932865-955c-44dc-8645-6487946ff858 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.523222849Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.527616058Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.527653039Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.52767539Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.531185143Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.531222714Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.531244606Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.534606098Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.534638493Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.534662362Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.538643652Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:41:23 embed-certs-779570 crio[653]: time="2025-10-09T19:41:23.538681142Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	446b9977cb579       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   b73f8f15a6a96       dashboard-metrics-scraper-6ffb444bf9-8ghrj   kubernetes-dashboard
	fc0c3ef4a639c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   9ae854d822fbc       storage-provisioner                          kube-system
	be2d4760bae31       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   50 seconds ago       Running             kubernetes-dashboard        0                   8cf59713c1912       kubernetes-dashboard-855c9754f9-dm67w        kubernetes-dashboard
	1d138f9143886       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   6aa7c5f6ae46c       busybox                                      default
	e12f4daf9424d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   5d1239ff91c87       kindnet-lgfbl                                kube-system
	54d87d820dc19       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   a88fe5b829d0a       coredns-66bc5c9577-4c9xb                     kube-system
	a30da166d076a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   fb50dd6d1300b       kube-proxy-sp4bk                             kube-system
	8eab2000543c1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   9ae854d822fbc       storage-provisioner                          kube-system
	e31682d081500       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   448b044326a74       kube-scheduler-embed-certs-779570            kube-system
	17c33253b376c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   51b1971357133       etcd-embed-certs-779570                      kube-system
	4be19799344c6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2bf93d519cbdb       kube-controller-manager-embed-certs-779570   kube-system
	bd786fb308618       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   826c363f38f07       kube-apiserver-embed-certs-779570            kube-system
	
	
	==> coredns [54d87d820dc1938f8d34bfd342416dac5f2adf821653498270e0a72d6b35d5f4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34104 - 44765 "HINFO IN 5734484300843791744.4217364063665467189. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010769866s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-779570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-779570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=embed-certs-779570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_39_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-779570
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:41:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:41:01 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:41:01 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:41:01 +0000   Thu, 09 Oct 2025 19:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:41:01 +0000   Thu, 09 Oct 2025 19:39:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-779570
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 14d3a0aafc5647aa9ddf97cb58a3e9e0
	  System UUID:                1e5d6a7e-cdd6-479d-b40f-96791041c4dd
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-4c9xb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m31s
	  kube-system                 etcd-embed-certs-779570                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m36s
	  kube-system                 kindnet-lgfbl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m31s
	  kube-system                 kube-apiserver-embed-certs-779570             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-controller-manager-embed-certs-779570    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-sp4bk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-embed-certs-779570             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8ghrj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dm67w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m28s                  kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node embed-certs-779570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x8 over 2m48s)  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m36s                  kubelet          Node embed-certs-779570 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s                  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m36s                  kubelet          Node embed-certs-779570 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m32s                  node-controller  Node embed-certs-779570 event: Registered Node embed-certs-779570 in Controller
	  Normal   NodeReady                108s                   kubelet          Node embed-certs-779570 status is now: NodeReady
	  Normal   Starting                 73s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)      kubelet          Node embed-certs-779570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)      kubelet          Node embed-certs-779570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)      kubelet          Node embed-certs-779570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node embed-certs-779570 event: Registered Node embed-certs-779570 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[  +9.958825] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [17c33253b376c0b387ae7ebe4e58be315318a5622f757e30efdd1a57e6553e7d] <==
	{"level":"warn","ts":"2025-10-09T19:40:39.057157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.108176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.143162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.178341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.222340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.279635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.324020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.346901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.370871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.403765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.421595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.466244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.479181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.528544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.568505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.598604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.630492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.646344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.691640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.714563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.748324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.781495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.799076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.826181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:39.904882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36502","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:41:43 up  2:24,  0 user,  load average: 2.87, 3.04, 2.43
	Linux embed-certs-779570 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e12f4daf9424db3be061381fcf8b34688e94a433a3c0e1bca9a0641e37f02174] <==
	I1009 19:40:43.312534       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:40:43.312760       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:40:43.312891       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:40:43.312903       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:40:43.312913       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:40:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:40:43.527419       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:40:43.527436       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:40:43.527444       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:40:43.527555       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:41:13.527665       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:41:13.527665       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:41:13.527764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 19:41:13.527821       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 19:41:14.927915       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:41:14.927957       1 metrics.go:72] Registering metrics
	I1009 19:41:14.928057       1 controller.go:711] "Syncing nftables rules"
	I1009 19:41:23.522890       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:41:23.522943       1 main.go:301] handling current node
	I1009 19:41:33.529165       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:41:33.529204       1 main.go:301] handling current node
	I1009 19:41:43.530476       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 19:41:43.530506       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bd786fb30861863193a83b2938180673280745718306b229493eaee4d7f1c6df] <==
	I1009 19:40:41.220655       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:40:41.231751       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:40:41.245171       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 19:40:41.250157       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:40:41.281136       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 19:40:41.292508       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:40:41.292556       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:40:41.292758       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:40:41.292908       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:40:41.292977       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 19:40:41.293067       1 aggregator.go:171] initial CRD sync complete...
	I1009 19:40:41.293077       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 19:40:41.293081       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:40:41.293086       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:40:41.863967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:40:42.073217       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:40:43.575315       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:40:43.744674       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:40:43.852260       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:40:43.870523       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:40:44.040475       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.49.130"}
	I1009 19:40:44.076186       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.156.159"}
	I1009 19:40:45.879457       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:40:46.161164       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:40:46.243044       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4be19799344c62649bd6f8d67821e8145a7756b618a2eef2982c64fd4b30a0c8] <==
	I1009 19:40:45.619304       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:40:45.619356       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:40:45.619610       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:40:45.621322       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:40:45.634222       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:40:45.654202       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:40:45.654204       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 19:40:45.654223       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:40:45.678256       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:40:45.678298       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:40:45.678334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:40:45.678373       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:40:45.678408       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:40:45.678435       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:40:45.678645       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:40:45.679475       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:40:45.680937       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:40:45.680971       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:40:45.682385       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:40:45.693982       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:40:45.702535       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:40:45.758731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:40:45.758763       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:40:45.758777       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:40:45.782512       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a30da166d076a40602ea5309119d43b6346a615ba7729aabad0cf470c756b482] <==
	I1009 19:40:43.561323       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:40:43.739026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:40:43.877646       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:40:43.879698       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:40:43.879790       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:40:44.063917       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:40:44.063971       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:40:44.085871       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:40:44.093019       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:40:44.093050       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:40:44.106465       1 config.go:200] "Starting service config controller"
	I1009 19:40:44.106483       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:40:44.106518       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:40:44.106522       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:40:44.106534       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:40:44.106538       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:40:44.107175       1 config.go:309] "Starting node config controller"
	I1009 19:40:44.107184       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:40:44.107191       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:40:44.207362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:40:44.207460       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:40:44.207486       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e31682d081500642ab41d785eae95bb338cc60ecad8ebf0b9e2c526d9258fe13] <==
	I1009 19:40:38.524402       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:40:41.351613       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:40:41.351654       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:40:41.371103       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:40:41.371210       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 19:40:41.371236       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 19:40:41.371260       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:40:41.373371       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:40:41.373402       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:40:41.373423       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:40:41.373429       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:40:41.482837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:40:41.483264       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 19:40:41.483343       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: I1009 19:40:46.326478     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsg7h\" (UniqueName: \"kubernetes.io/projected/2d517ad9-c456-40c1-ae85-a137f48a5f5e-kube-api-access-dsg7h\") pod \"kubernetes-dashboard-855c9754f9-dm67w\" (UID: \"2d517ad9-c456-40c1-ae85-a137f48a5f5e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dm67w"
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: I1009 19:40:46.329030     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m9rd\" (UniqueName: \"kubernetes.io/projected/d3fa6474-3aea-4647-928f-bd921690d575-kube-api-access-6m9rd\") pod \"dashboard-metrics-scraper-6ffb444bf9-8ghrj\" (UID: \"d3fa6474-3aea-4647-928f-bd921690d575\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj"
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: I1009 19:40:46.329236     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2d517ad9-c456-40c1-ae85-a137f48a5f5e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dm67w\" (UID: \"2d517ad9-c456-40c1-ae85-a137f48a5f5e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dm67w"
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: I1009 19:40:46.329343     779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6474-3aea-4647-928f-bd921690d575-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8ghrj\" (UID: \"d3fa6474-3aea-4647-928f-bd921690d575\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj"
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: W1009 19:40:46.538292     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/crio-8cf59713c1912d2e6a9b209eee9c3b1965eacbdc9baef200dcae3738e69a7744 WatchSource:0}: Error finding container 8cf59713c1912d2e6a9b209eee9c3b1965eacbdc9baef200dcae3738e69a7744: Status 404 returned error can't find the container with id 8cf59713c1912d2e6a9b209eee9c3b1965eacbdc9baef200dcae3738e69a7744
	Oct 09 19:40:46 embed-certs-779570 kubelet[779]: W1009 19:40:46.569778     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/81a5b0bcbd3e597c85858ef0100440598fd9f8b081dd0e78f059b4c89643e7d0/crio-b73f8f15a6a96076a7b14b7e0585da90662e0bbb3972e1710dd325d6b7c9e211 WatchSource:0}: Error finding container b73f8f15a6a96076a7b14b7e0585da90662e0bbb3972e1710dd325d6b7c9e211: Status 404 returned error can't find the container with id b73f8f15a6a96076a7b14b7e0585da90662e0bbb3972e1710dd325d6b7c9e211
	Oct 09 19:40:53 embed-certs-779570 kubelet[779]: I1009 19:40:53.338535     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dm67w" podStartSLOduration=0.696959997 podStartE2EDuration="7.338518365s" podCreationTimestamp="2025-10-09 19:40:46 +0000 UTC" firstStartedPulling="2025-10-09 19:40:46.541897346 +0000 UTC m=+15.952288492" lastFinishedPulling="2025-10-09 19:40:53.183455632 +0000 UTC m=+22.593846860" observedRunningTime="2025-10-09 19:40:53.335162871 +0000 UTC m=+22.745554034" watchObservedRunningTime="2025-10-09 19:40:53.338518365 +0000 UTC m=+22.748909511"
	Oct 09 19:41:00 embed-certs-779570 kubelet[779]: I1009 19:41:00.355793     779 scope.go:117] "RemoveContainer" containerID="0260b5189e2ff6224db807004f4be76c28292e028821d6f93baa07f451d2d990"
	Oct 09 19:41:01 embed-certs-779570 kubelet[779]: I1009 19:41:01.354600     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:01 embed-certs-779570 kubelet[779]: E1009 19:41:01.354943     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:01 embed-certs-779570 kubelet[779]: I1009 19:41:01.355117     779 scope.go:117] "RemoveContainer" containerID="0260b5189e2ff6224db807004f4be76c28292e028821d6f93baa07f451d2d990"
	Oct 09 19:41:02 embed-certs-779570 kubelet[779]: I1009 19:41:02.358846     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:02 embed-certs-779570 kubelet[779]: E1009 19:41:02.359005     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:06 embed-certs-779570 kubelet[779]: I1009 19:41:06.499050     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:06 embed-certs-779570 kubelet[779]: E1009 19:41:06.499231     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:13 embed-certs-779570 kubelet[779]: I1009 19:41:13.385070     779 scope.go:117] "RemoveContainer" containerID="8eab2000543c1d27f928d4c888318e070735d4233b4da58e8cfc2fd633fb79f3"
	Oct 09 19:41:21 embed-certs-779570 kubelet[779]: I1009 19:41:21.018426     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:21 embed-certs-779570 kubelet[779]: I1009 19:41:21.410630     779 scope.go:117] "RemoveContainer" containerID="f5bb92e77083e373d3690c12ed1112b069b597281117df180726bc1ffec61241"
	Oct 09 19:41:21 embed-certs-779570 kubelet[779]: I1009 19:41:21.410991     779 scope.go:117] "RemoveContainer" containerID="446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f"
	Oct 09 19:41:21 embed-certs-779570 kubelet[779]: E1009 19:41:21.411200     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:26 embed-certs-779570 kubelet[779]: I1009 19:41:26.505673     779 scope.go:117] "RemoveContainer" containerID="446b9977cb579388c1161f94982069121b231f6ce981e0bf871d03448f13d76f"
	Oct 09 19:41:26 embed-certs-779570 kubelet[779]: E1009 19:41:26.505894     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8ghrj_kubernetes-dashboard(d3fa6474-3aea-4647-928f-bd921690d575)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8ghrj" podUID="d3fa6474-3aea-4647-928f-bd921690d575"
	Oct 09 19:41:38 embed-certs-779570 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:41:38 embed-certs-779570 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:41:38 embed-certs-779570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [be2d4760bae314d76c51cc9122f7ba323e293e47592bbcb36827264e3fac02c6] <==
	2025/10/09 19:40:53 Using namespace: kubernetes-dashboard
	2025/10/09 19:40:53 Using in-cluster config to connect to apiserver
	2025/10/09 19:40:53 Using secret token for csrf signing
	2025/10/09 19:40:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 19:40:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 19:40:53 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 19:40:53 Generating JWE encryption key
	2025/10/09 19:40:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 19:40:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 19:40:55 Initializing JWE encryption key from synchronized object
	2025/10/09 19:40:55 Creating in-cluster Sidecar client
	2025/10/09 19:40:55 Serving insecurely on HTTP port: 9090
	2025/10/09 19:40:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:41:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:40:53 Starting overwatch
	
	
	==> storage-provisioner [8eab2000543c1d27f928d4c888318e070735d4233b4da58e8cfc2fd633fb79f3] <==
	I1009 19:40:42.739220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:41:12.740570       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fc0c3ef4a639ccbc3d6ed8d36520f8322003481b7ac29e100ae0450982064103] <==
	W1009 19:41:13.465221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:16.920554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:21.181307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:24.779813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:27.833997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:30.855697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:30.860659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:41:30.860923       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:41:30.861099       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-779570_346f8794-c177-449a-ad4d-c6ab0738ea79!
	I1009 19:41:30.861344       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba6ad425-7ecb-45c9-9bf0-c63c463c7246", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-779570_346f8794-c177-449a-ad4d-c6ab0738ea79 became leader
	W1009 19:41:30.864917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:30.873423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:41:30.961961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-779570_346f8794-c177-449a-ad4d-c6ab0738ea79!
	W1009 19:41:32.876416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:32.881144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:34.884364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:34.891177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:36.894681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:36.900353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:38.903568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:38.911110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:40.914969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:40.923408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:42.929883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:42.937652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-779570 -n embed-certs-779570
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-779570 -n embed-certs-779570: exit status 2 (388.037ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-779570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.57s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (393.289847ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:41:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-661639 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-661639 describe deploy/metrics-server -n kube-system: exit status 1 (85.606067ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-661639 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-661639
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-661639:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438",
	        "Created": "2025-10-09T19:40:19.30361096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480548,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:40:19.365215152Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/hostname",
	        "HostsPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/hosts",
	        "LogPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438-json.log",
	        "Name": "/default-k8s-diff-port-661639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-661639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-661639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438",
	                "LowerDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-661639",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-661639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-661639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-661639",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-661639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "893662550c0db2a279a1b391efc5bfad75e23cab622b239be63b88cb102ad6ed",
	            "SandboxKey": "/var/run/docker/netns/893662550c0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-661639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:c6:e9:11:dd:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86fdc851eb8ad7fec0353db31405ee8fa251cbc2c81dd836e7fbb59e4102b63e",
	                    "EndpointID": "3491c1a03bef8670c32b5b68ab2a7565956d43a90823be84f0a104229827dd99",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-661639",
	                        "09130103b04f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-661639 logs -n 25
E1009 19:41:53.000657  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-661639 logs -n 25: (1.894736198s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-271815 image list --format=json                                                                                                                                                                                               │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ pause   │ -p old-k8s-version-271815 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-678119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │                     │
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ stop    │ -p no-preload-678119 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p no-preload-678119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ stop    │ -p embed-certs-779570 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p disable-driver-mounts-557073                                                                                                                                                                                                               │ disable-driver-mounts-557073 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ image   │ embed-certs-779570 image list --format=json                                                                                                                                                                                                   │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-779570 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
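
The audit table above records the minikube invocations issued against each profile during this run. A hedged cross-check sketch, not part of the captured report, is to query the live profile state with the same binary (this assumes the profiles still exist on the host and that this minikube build supports --output json for profile list):

	# List profiles and their configured runtimes to compare against the audit rows above.
	out/minikube-linux-arm64 profile list --output json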
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:41:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:41:47.484836  486363 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:41:47.485029  486363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:41:47.485059  486363 out.go:374] Setting ErrFile to fd 2...
	I1009 19:41:47.485079  486363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:41:47.485345  486363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:41:47.485806  486363 out.go:368] Setting JSON to false
	I1009 19:41:47.486812  486363 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8659,"bootTime":1760030249,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:41:47.486923  486363 start.go:141] virtualization:  
	I1009 19:41:47.491104  486363 out.go:179] * [newest-cni-532612] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:41:47.494608  486363 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:41:47.494683  486363 notify.go:220] Checking for updates...
	I1009 19:41:47.501046  486363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:41:47.504193  486363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:41:47.507422  486363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:41:47.510527  486363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:41:47.513537  486363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:41:47.517098  486363 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:41:47.517262  486363 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:41:47.539666  486363 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:41:47.539809  486363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:41:47.599902  486363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:41:47.590697097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:41:47.600012  486363 docker.go:318] overlay module found
	I1009 19:41:47.603195  486363 out.go:179] * Using the docker driver based on user configuration
	I1009 19:41:47.606077  486363 start.go:305] selected driver: docker
	I1009 19:41:47.606098  486363 start.go:925] validating driver "docker" against <nil>
	I1009 19:41:47.606112  486363 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:41:47.607003  486363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:41:47.663187  486363 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:41:47.653322842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:41:47.663343  486363 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1009 19:41:47.663369  486363 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1009 19:41:47.663606  486363 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 19:41:47.666662  486363 out.go:179] * Using Docker driver with root privileges
	I1009 19:41:47.669621  486363 cni.go:84] Creating CNI manager for ""
	I1009 19:41:47.669700  486363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:41:47.669715  486363 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:41:47.669795  486363 start.go:349] cluster config:
	{Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:41:47.674683  486363 out.go:179] * Starting "newest-cni-532612" primary control-plane node in "newest-cni-532612" cluster
	I1009 19:41:47.677545  486363 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:41:47.680549  486363 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:41:47.683409  486363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:41:47.683490  486363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:41:47.683532  486363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:41:47.683556  486363 cache.go:64] Caching tarball of preloaded images
	I1009 19:41:47.683635  486363 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:41:47.683645  486363 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:41:47.683754  486363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/config.json ...
	I1009 19:41:47.683773  486363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/config.json: {Name:mk27135c018ae02186364508eeab031bce893fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:41:47.703763  486363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:41:47.703789  486363 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:41:47.703819  486363 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:41:47.703843  486363 start.go:360] acquireMachinesLock for newest-cni-532612: {Name:mk8a2332e6fb43f25fcf3e7ccbe060e53d52313a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:41:47.703967  486363 start.go:364] duration metric: took 103.689µs to acquireMachinesLock for "newest-cni-532612"
	I1009 19:41:47.704000  486363 start.go:93] Provisioning new machine with config: &{Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:41:47.704077  486363 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:41:47.707559  486363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:41:47.707857  486363 start.go:159] libmachine.API.Create for "newest-cni-532612" (driver="docker")
	I1009 19:41:47.707917  486363 client.go:168] LocalClient.Create starting
	I1009 19:41:47.708012  486363 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:41:47.708060  486363 main.go:141] libmachine: Decoding PEM data...
	I1009 19:41:47.708077  486363 main.go:141] libmachine: Parsing certificate...
	I1009 19:41:47.708137  486363 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:41:47.708159  486363 main.go:141] libmachine: Decoding PEM data...
	I1009 19:41:47.708173  486363 main.go:141] libmachine: Parsing certificate...
	I1009 19:41:47.708540  486363 cli_runner.go:164] Run: docker network inspect newest-cni-532612 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:41:47.725656  486363 cli_runner.go:211] docker network inspect newest-cni-532612 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:41:47.725736  486363 network_create.go:284] running [docker network inspect newest-cni-532612] to gather additional debugging logs...
	I1009 19:41:47.725753  486363 cli_runner.go:164] Run: docker network inspect newest-cni-532612
	W1009 19:41:47.741928  486363 cli_runner.go:211] docker network inspect newest-cni-532612 returned with exit code 1
	I1009 19:41:47.741960  486363 network_create.go:287] error running [docker network inspect newest-cni-532612]: docker network inspect newest-cni-532612: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-532612 not found
	I1009 19:41:47.741975  486363 network_create.go:289] output of [docker network inspect newest-cni-532612]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-532612 not found
	
	** /stderr **
	I1009 19:41:47.742083  486363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:41:47.759618  486363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:41:47.760036  486363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:41:47.760257  486363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:41:47.760558  486363 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86fdc851eb8a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:2e:ad:fa:a7:05} reservation:<nil>}
	I1009 19:41:47.761003  486363 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a9870}
	I1009 19:41:47.761040  486363 network_create.go:124] attempt to create docker network newest-cni-532612 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 19:41:47.761095  486363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-532612 newest-cni-532612
	I1009 19:41:47.819881  486363 network_create.go:108] docker network newest-cni-532612 192.168.85.0/24 created
	I1009 19:41:47.819915  486363 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-532612" container
	I1009 19:41:47.820011  486363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:41:47.840609  486363 cli_runner.go:164] Run: docker volume create newest-cni-532612 --label name.minikube.sigs.k8s.io=newest-cni-532612 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:41:47.859475  486363 oci.go:103] Successfully created a docker volume newest-cni-532612
	I1009 19:41:47.859571  486363 cli_runner.go:164] Run: docker run --rm --name newest-cni-532612-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-532612 --entrypoint /usr/bin/test -v newest-cni-532612:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:41:48.464408  486363 oci.go:107] Successfully prepared a docker volume newest-cni-532612
	I1009 19:41:48.464481  486363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:41:48.464499  486363 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:41:48.464565  486363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-532612:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
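
The start log above ends with minikube creating the newest-cni-532612 bridge network on the free 192.168.85.0/24 subnet and extracting the preloaded image tarball into a Docker volume of the same name. A hedged verification sketch, assuming those Docker objects are still present on the host:

	# Confirm the subnet and the volume mount point produced by the steps logged above.
	docker network inspect newest-cni-532612 --format '{{json .IPAM.Config}}'
	docker volume inspect newest-cni-532612 --format '{{.Mountpoint}}'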
	
	
	==> CRI-O <==
	Oct 09 19:41:40 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:40.630469702Z" level=info msg="Created container c429f4cd93dfd8741860bc27b18ac8e3f1076b62daec4dbacd3fbbd269325bf2: kube-system/coredns-66bc5c9577-xmz2b/coredns" id=80e31c1e-62bf-4c1d-843a-db8e6a64c234 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:41:40 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:40.631345717Z" level=info msg="Starting container: c429f4cd93dfd8741860bc27b18ac8e3f1076b62daec4dbacd3fbbd269325bf2" id=f89d58d4-d796-4430-9147-fb3e8749da86 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:41:40 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:40.633581116Z" level=info msg="Started container" PID=1743 containerID=c429f4cd93dfd8741860bc27b18ac8e3f1076b62daec4dbacd3fbbd269325bf2 description=kube-system/coredns-66bc5c9577-xmz2b/coredns id=f89d58d4-d796-4430-9147-fb3e8749da86 name=/runtime.v1.RuntimeService/StartContainer sandboxID=27b129cc4937a6d5b56d91ee9ff6a0d3608003446410333afeaa2ecd33888b0e
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.654062761Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e84fe304-1ecd-41b3-9abc-f3a92ec88783 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.654190131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.667765588Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cc14784a709a84ca31d704da055b6bc26e5c274f4ee4d36c076747b5152d93e2 UID:502e5162-8647-4d5c-8bb6-483efa4658f3 NetNS:/var/run/netns/b7822797-571b-47c9-9955-78bb2d514de5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078a80}] Aliases:map[]}"
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.668088521Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.681176637Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cc14784a709a84ca31d704da055b6bc26e5c274f4ee4d36c076747b5152d93e2 UID:502e5162-8647-4d5c-8bb6-483efa4658f3 NetNS:/var/run/netns/b7822797-571b-47c9-9955-78bb2d514de5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078a80}] Aliases:map[]}"
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.682104944Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.693995064Z" level=info msg="Ran pod sandbox cc14784a709a84ca31d704da055b6bc26e5c274f4ee4d36c076747b5152d93e2 with infra container: default/busybox/POD" id=e84fe304-1ecd-41b3-9abc-f3a92ec88783 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.695781087Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0d56f827-1101-49b0-907b-b8f3905c4f5e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.696051843Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0d56f827-1101-49b0-907b-b8f3905c4f5e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.69616556Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0d56f827-1101-49b0-907b-b8f3905c4f5e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.697332868Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=223c4a74-4f95-4e47-a2c1-741e2f8dd7fb name=/runtime.v1.ImageService/PullImage
	Oct 09 19:41:43 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:43.699821415Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.733954994Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=223c4a74-4f95-4e47-a2c1-741e2f8dd7fb name=/runtime.v1.ImageService/PullImage
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.734657386Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=63b41f85-ec95-4910-ad76-2807112147e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.736527964Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b5420738-411c-4f49-b2bc-6cda0532c5bc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.744629108Z" level=info msg="Creating container: default/busybox/busybox" id=bb0a8a3b-8e25-423c-b710-9511c9d4dbf5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.745437807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.750272334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.750873457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.767257424Z" level=info msg="Created container e4494a0f516a8c56612e1f18152685e4daa89111077d4803f954a8f95eb7c5fe: default/busybox/busybox" id=bb0a8a3b-8e25-423c-b710-9511c9d4dbf5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.768387891Z" level=info msg="Starting container: e4494a0f516a8c56612e1f18152685e4daa89111077d4803f954a8f95eb7c5fe" id=46260af9-ce94-4b9f-8cb6-58d62a33a3cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:41:45 default-k8s-diff-port-661639 crio[840]: time="2025-10-09T19:41:45.770536939Z" level=info msg="Started container" PID=1794 containerID=e4494a0f516a8c56612e1f18152685e4daa89111077d4803f954a8f95eb7c5fe description=default/busybox/busybox id=46260af9-ce94-4b9f-8cb6-58d62a33a3cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc14784a709a84ca31d704da055b6bc26e5c274f4ee4d36c076747b5152d93e2
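
The CRI-O excerpt above shows the default/busybox sandbox being attached to the kindnet CNI network, the gcr.io/k8s-minikube/busybox:1.28.4-glibc image being pulled, and the container starting. A hedged follow-up sketch for inspecting the same container through the CRI on the node (assumes crictl is available in the node image, as it is in the kicbase images used here):

	# Inspect the busybox container and its image directly through the CRI.
	out/minikube-linux-arm64 -p default-k8s-diff-port-661639 ssh -- sudo crictl ps --name busybox
	out/minikube-linux-arm64 -p default-k8s-diff-port-661639 ssh -- sudo crictl images gcr.io/k8s-minikube/busybox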
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e4494a0f516a8       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   cc14784a709a8       busybox                                                default
	c429f4cd93dfd       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   27b129cc4937a       coredns-66bc5c9577-xmz2b                               kube-system
	fa85bc18d6bad       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   a95cf6bc1091d       storage-provisioner                                    kube-system
	7c301bc659c3b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   462e8d0f3a565       kube-proxy-8nqdl                                       kube-system
	7e0e3ee5cfc9c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   210fc562a1f70       kindnet-29w5k                                          kube-system
	e1d5afae822bc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   04abbc5f63340       kube-apiserver-default-k8s-diff-port-661639            kube-system
	8890eeb414a5f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   19ec6f46bdbe0       kube-scheduler-default-k8s-diff-port-661639            kube-system
	eca2d20624d44       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   75c9621fede14       kube-controller-manager-default-k8s-diff-port-661639   kube-system
	a8df12f9337a4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   fb94f4c07ac79       etcd-default-k8s-diff-port-661639                      kube-system
	
	
	==> coredns [c429f4cd93dfd8741860bc27b18ac8e3f1076b62daec4dbacd3fbbd269325bf2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42963 - 55965 "HINFO IN 6394006833875118462.8215754072600860551. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013642354s
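
The CoreDNS startup output above reports the loaded configuration SHA and a successful self-test HINFO lookup. A hedged sketch for viewing the Corefile behind that SHA, assuming the profile's kubeconfig context is named after the profile (minikube's default behavior):

	# Show the CoreDNS configuration in use on this cluster.
	kubectl --context default-k8s-diff-port-661639 -n kube-system get configmap coredns -o yaml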
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-661639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-661639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=default-k8s-diff-port-661639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_40_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:40:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-661639
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:41:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:41:40 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:41:40 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:41:40 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:41:40 +0000   Thu, 09 Oct 2025 19:41:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-661639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b17db861226341a4b4c9364798f06564
	  System UUID:                7c98678a-bd01-4444-9c47-8681509e122a
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-xmz2b                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-661639                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-29w5k                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-661639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-661639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-8nqdl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-661639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  72s (x8 over 73s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 73s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 73s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-661639 event: Registered Node default-k8s-diff-port-661639 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-661639 status is now: NodeReady
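
The node description above was captured shortly after default-k8s-diff-port-661639 turned Ready (the NodeReady event is 13s old at snapshot time). A hedged sketch to regenerate it, under the same kubeconfig-context assumption as above:

	# Re-run the node description shown above.
	kubectl --context default-k8s-diff-port-661639 describe node default-k8s-diff-port-661639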
	
	
	==> dmesg <==
	[Oct 9 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[  +9.958825] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a8df12f9337a402f0e32b3f1988595037e72a193765b1bf8bfe473103e448754] <==
	{"level":"warn","ts":"2025-10-09T19:40:46.900715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:46.924156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:46.950081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:46.989597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.015941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.048535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.073034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.119754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.147145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.173661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.210015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.261611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.288823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.325620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.348699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.389506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.463183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.470932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.485904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.508656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.551458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.578314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.626410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.650862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:40:47.805254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45360","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:41:53 up  2:24,  0 user,  load average: 2.81, 3.03, 2.43
	Linux default-k8s-diff-port-661639 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e0e3ee5cfc9c8cbd8a54b84c19020c6bcaf5dc733f8b35535489b2c29398fb6] <==
	I1009 19:40:59.331731       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:40:59.331948       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 19:40:59.332096       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:40:59.332108       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:40:59.332121       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:40:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:40:59.531187       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:40:59.531206       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:40:59.531215       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:40:59.615218       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:41:29.532283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:41:29.614924       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:41:29.615056       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:41:29.615196       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 19:41:30.932293       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:41:30.932322       1 metrics.go:72] Registering metrics
	I1009 19:41:30.932388       1 controller.go:711] "Syncing nftables rules"
	I1009 19:41:39.536234       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:41:39.536284       1 main.go:301] handling current node
	I1009 19:41:49.531735       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:41:49.531772       1 main.go:301] handling current node
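
The kindnet log above shows its informer list/watch calls to https://10.96.0.1:443 timing out for roughly 30 seconds before the caches sync and node handling resumes. A hedged connectivity-check sketch from inside the node (assumes curl is present in the node image; -k is used because the in-cluster CA is not trusted by the node's own trust store):

	# Probe the in-cluster apiserver VIP that kindnet dials above; prints the HTTP status code.
	out/minikube-linux-arm64 -p default-k8s-diff-port-661639 ssh -- curl -k -sS -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/version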
	
	
	==> kube-apiserver [e1d5afae822bc690cecddd5eb7f20f704847bd1082f5364e85c0d728a31cd8a2] <==
	I1009 19:40:49.416948       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 19:40:49.416954       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:40:49.416960       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:40:49.434678       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:40:49.434798       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 19:40:49.441536       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:40:49.467844       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:40:49.474074       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:40:49.976164       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 19:40:49.982004       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 19:40:49.982431       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:40:51.106053       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:40:51.199143       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:40:51.304887       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:40:51.315535       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1009 19:40:51.317280       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:40:51.323945       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:40:52.135997       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:40:52.588937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:40:52.613940       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:40:52.626972       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 19:40:58.029870       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1009 19:40:58.160781       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:40:58.261592       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:40:58.265771       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [eca2d20624d44ed0a87afc5b738e5b6dea144feb96880f6bd3b98889e858344c] <==
	I1009 19:40:57.178977       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 19:40:57.179156       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:40:57.179734       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 19:40:57.186456       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-661639" podCIDRs=["10.244.0.0/24"]
	I1009 19:40:57.186652       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:40:57.204092       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 19:40:57.205368       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:40:57.206519       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:40:57.206567       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:40:57.206831       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 19:40:57.206851       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 19:40:57.207001       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:40:57.208092       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:40:57.208370       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:40:57.208428       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:40:57.211519       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:40:57.215303       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 19:40:57.215690       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:40:57.215728       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:40:57.216854       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:40:57.304211       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:40:57.304237       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:40:57.304245       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:40:57.313226       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:41:42.163944       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7c301bc659c3b086c37fd76f249d7fcb715c19f2ce2a93d9b9cc03f5d7986ba4] <==
	I1009 19:40:59.660086       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:40:59.763374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:40:59.864454       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:40:59.864501       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 19:40:59.864584       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:40:59.889848       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:40:59.889910       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:40:59.894849       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:40:59.895280       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:40:59.895478       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:40:59.897015       1 config.go:200] "Starting service config controller"
	I1009 19:40:59.897080       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:40:59.897145       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:40:59.897173       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:40:59.897222       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:40:59.897249       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:40:59.900141       1 config.go:309] "Starting node config controller"
	I1009 19:40:59.901771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:40:59.901842       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:40:59.997741       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:40:59.997777       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:40:59.997833       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8890eeb414a5f6994827995b7e855eb16ff409840373efe21269a0defd077637] <==
	I1009 19:40:49.466872       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1009 19:40:49.480589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 19:40:49.516870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:40:49.517173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:40:49.517409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 19:40:49.517678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 19:40:49.517831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:40:49.517940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:40:49.518113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:40:49.518223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:40:49.518322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:40:49.518430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:40:49.518551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:40:49.518649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:40:49.518744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:40:49.518854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 19:40:49.518968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:40:49.519112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 19:40:49.519226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:40:49.519305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:40:50.478005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:40:50.479367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:40:50.543159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:40:50.618199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1009 19:40:53.270492       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: E1009 19:40:58.185890    1312 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-661639\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-661639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: E1009 19:40:58.187341    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-29w5k\" is forbidden: User \"system:node:default-k8s-diff-port-661639\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-661639' and this object" podUID="e71ef6ee-34c7-49c9-ae9f-439bc2897f22" pod="kube-system/kindnet-29w5k"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:58.210454    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e71ef6ee-34c7-49c9-ae9f-439bc2897f22-xtables-lock\") pod \"kindnet-29w5k\" (UID: \"e71ef6ee-34c7-49c9-ae9f-439bc2897f22\") " pod="kube-system/kindnet-29w5k"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:58.210683    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e71ef6ee-34c7-49c9-ae9f-439bc2897f22-cni-cfg\") pod \"kindnet-29w5k\" (UID: \"e71ef6ee-34c7-49c9-ae9f-439bc2897f22\") " pod="kube-system/kindnet-29w5k"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:58.210783    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e71ef6ee-34c7-49c9-ae9f-439bc2897f22-lib-modules\") pod \"kindnet-29w5k\" (UID: \"e71ef6ee-34c7-49c9-ae9f-439bc2897f22\") " pod="kube-system/kindnet-29w5k"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:58.210879    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtkhj\" (UniqueName: \"kubernetes.io/projected/e71ef6ee-34c7-49c9-ae9f-439bc2897f22-kube-api-access-wtkhj\") pod \"kindnet-29w5k\" (UID: \"e71ef6ee-34c7-49c9-ae9f-439bc2897f22\") " pod="kube-system/kindnet-29w5k"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: E1009 19:40:58.213390    1312 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-661639\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-661639' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: E1009 19:40:58.213641    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-29w5k\" is forbidden: User \"system:node:default-k8s-diff-port-661639\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-661639' and this object" podUID="e71ef6ee-34c7-49c9-ae9f-439bc2897f22" pod="kube-system/kindnet-29w5k"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:58.312110    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/10c09c57-34e7-4872-b609-9660c2a3777a-kube-proxy\") pod \"kube-proxy-8nqdl\" (UID: \"10c09c57-34e7-4872-b609-9660c2a3777a\") " pod="kube-system/kube-proxy-8nqdl"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:58.312202    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10c09c57-34e7-4872-b609-9660c2a3777a-xtables-lock\") pod \"kube-proxy-8nqdl\" (UID: \"10c09c57-34e7-4872-b609-9660c2a3777a\") " pod="kube-system/kube-proxy-8nqdl"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:58.312283    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2r6n\" (UniqueName: \"kubernetes.io/projected/10c09c57-34e7-4872-b609-9660c2a3777a-kube-api-access-c2r6n\") pod \"kube-proxy-8nqdl\" (UID: \"10c09c57-34e7-4872-b609-9660c2a3777a\") " pod="kube-system/kube-proxy-8nqdl"
	Oct 09 19:40:58 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:58.312319    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10c09c57-34e7-4872-b609-9660c2a3777a-lib-modules\") pod \"kube-proxy-8nqdl\" (UID: \"10c09c57-34e7-4872-b609-9660c2a3777a\") " pod="kube-system/kube-proxy-8nqdl"
	Oct 09 19:40:59 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:40:59.041836    1312 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:40:59 default-k8s-diff-port-661639 kubelet[1312]: W1009 19:40:59.448205    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/crio-462e8d0f3a565eefbe1cc8d056f7a5cc0b98c95b85be969cd32b75c8dcedb2e6 WatchSource:0}: Error finding container 462e8d0f3a565eefbe1cc8d056f7a5cc0b98c95b85be969cd32b75c8dcedb2e6: Status 404 returned error can't find the container with id 462e8d0f3a565eefbe1cc8d056f7a5cc0b98c95b85be969cd32b75c8dcedb2e6
	Oct 09 19:41:00 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:00.168607    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8nqdl" podStartSLOduration=2.1685832290000002 podStartE2EDuration="2.168583229s" podCreationTimestamp="2025-10-09 19:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:41:00.055826913 +0000 UTC m=+7.517089130" watchObservedRunningTime="2025-10-09 19:41:00.168583229 +0000 UTC m=+7.629845446"
	Oct 09 19:41:02 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:02.823399    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-29w5k" podStartSLOduration=4.823378599 podStartE2EDuration="4.823378599s" podCreationTimestamp="2025-10-09 19:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:41:00.171408682 +0000 UTC m=+7.632670907" watchObservedRunningTime="2025-10-09 19:41:02.823378599 +0000 UTC m=+10.284640898"
	Oct 09 19:41:40 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:40.065168    1312 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 09 19:41:40 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:40.243712    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4f45d1d-93b4-496e-9086-b11b78d81810-config-volume\") pod \"coredns-66bc5c9577-xmz2b\" (UID: \"f4f45d1d-93b4-496e-9086-b11b78d81810\") " pod="kube-system/coredns-66bc5c9577-xmz2b"
	Oct 09 19:41:40 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:40.243826    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmzr\" (UniqueName: \"kubernetes.io/projected/f4f45d1d-93b4-496e-9086-b11b78d81810-kube-api-access-krmzr\") pod \"coredns-66bc5c9577-xmz2b\" (UID: \"f4f45d1d-93b4-496e-9086-b11b78d81810\") " pod="kube-system/coredns-66bc5c9577-xmz2b"
	Oct 09 19:41:40 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:40.243883    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f1ab3395-bfab-42a6-b507-b132a04dfe14-tmp\") pod \"storage-provisioner\" (UID: \"f1ab3395-bfab-42a6-b507-b132a04dfe14\") " pod="kube-system/storage-provisioner"
	Oct 09 19:41:40 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:40.243916    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48h9\" (UniqueName: \"kubernetes.io/projected/f1ab3395-bfab-42a6-b507-b132a04dfe14-kube-api-access-l48h9\") pod \"storage-provisioner\" (UID: \"f1ab3395-bfab-42a6-b507-b132a04dfe14\") " pod="kube-system/storage-provisioner"
	Oct 09 19:41:41 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:41.113714    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.113692922 podStartE2EDuration="41.113692922s" podCreationTimestamp="2025-10-09 19:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:41:41.095036313 +0000 UTC m=+48.556298538" watchObservedRunningTime="2025-10-09 19:41:41.113692922 +0000 UTC m=+48.574955139"
	Oct 09 19:41:43 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:43.343493    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xmz2b" podStartSLOduration=45.343470596 podStartE2EDuration="45.343470596s" podCreationTimestamp="2025-10-09 19:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:41:41.114048479 +0000 UTC m=+48.575310713" watchObservedRunningTime="2025-10-09 19:41:43.343470596 +0000 UTC m=+50.804732821"
	Oct 09 19:41:43 default-k8s-diff-port-661639 kubelet[1312]: I1009 19:41:43.465624    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2k9\" (UniqueName: \"kubernetes.io/projected/502e5162-8647-4d5c-8bb6-483efa4658f3-kube-api-access-wz2k9\") pod \"busybox\" (UID: \"502e5162-8647-4d5c-8bb6-483efa4658f3\") " pod="default/busybox"
	Oct 09 19:41:43 default-k8s-diff-port-661639 kubelet[1312]: W1009 19:41:43.688459    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/crio-cc14784a709a84ca31d704da055b6bc26e5c274f4ee4d36c076747b5152d93e2 WatchSource:0}: Error finding container cc14784a709a84ca31d704da055b6bc26e5c274f4ee4d36c076747b5152d93e2: Status 404 returned error can't find the container with id cc14784a709a84ca31d704da055b6bc26e5c274f4ee4d36c076747b5152d93e2
	
	
	==> storage-provisioner [fa85bc18d6bad3cbc9e9c8245597d8efaa0fc9e7a70e798c01dd04e1eb86c9ff] <==
	I1009 19:41:40.608576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:41:40.694933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:41:40.694996       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 19:41:40.724658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:40.731802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:41:40.732024       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:41:40.732230       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661639_13582073-5767-4eac-8f9f-b3fd9e5b4889!
	I1009 19:41:40.740067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36623463-521a-4e44-abb0-3a458f21ddd5", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-661639_13582073-5767-4eac-8f9f-b3fd9e5b4889 became leader
	W1009 19:41:40.741906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:40.749313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:41:40.833143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661639_13582073-5767-4eac-8f9f-b3fd9e5b4889!
	W1009 19:41:42.753131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:42.760476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:44.764832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:44.769768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:46.773523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:46.778849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:48.783104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:48.787893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:50.790894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:50.795931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:52.800474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:41:52.843876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-661639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-532612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-532612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (429.948454ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:42:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-532612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
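
The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state probe: as part of the enable flow it runs `sudo runc list -f json` on the node, and here that command exits 1 because `/run/runc` does not exist inside the node container. A minimal way to rerun the same probe by hand, assuming the newest-cni-532612 profile is still running (illustrative only, not part of the test harness):

	# Re-run the paused-state probe that MK_ADDON_ENABLE_PAUSED reports on.
	# On a healthy node this prints a JSON array of runc containers and exits 0;
	# in this run it fails with "open /run/runc: no such file or directory",
	# which is exactly the stderr captured above.
	out/minikube-linux-arm64 -p newest-cni-532612 ssh -- sudo runc list -f json
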
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-532612
helpers_test.go:243: (dbg) docker inspect newest-cni-532612:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9",
	        "Created": "2025-10-09T19:41:53.109404869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:41:53.171178306Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/hosts",
	        "LogPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9-json.log",
	        "Name": "/newest-cni-532612",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-532612:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-532612",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9",
	                "LowerDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-532612",
	                "Source": "/var/lib/docker/volumes/newest-cni-532612/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-532612",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-532612",
	                "name.minikube.sigs.k8s.io": "newest-cni-532612",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6fdb5ef191494fcedf9deac3db66937aa7df039d0c6996e6b68078c096b0edd",
	            "SandboxKey": "/var/run/docker/netns/f6fdb5ef1914",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-532612": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:c2:36:5c:1e:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c633bac8bb6919b619e12e75f7760538611f5807d20349293f759d98cda4b7a",
	                    "EndpointID": "830bf58ff50b3ae7e7731e02f91e01d0ee923af045a2c526c28c515470e82bdf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-532612",
	                        "2d63c6e10b44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
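
The harness captures the full inspect dump above; for triage, usually only the container state and the published ports matter. A narrower query using docker's Go-template formatting (illustrative, and assuming the newest-cni-532612 container still exists) would be:

	# Print just the run state and the host port mappings seen in the dump above
	# (ports 33455-33459 in this run).
	docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' newest-cni-532612
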
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-532612 -n newest-cni-532612
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-532612 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-532612 logs -n 25: (1.601101518s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-271815                                                                                                                                                                                                                     │ old-k8s-version-271815       │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ stop    │ -p no-preload-678119 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ addons  │ enable dashboard -p no-preload-678119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:38 UTC │ 09 Oct 25 19:39 UTC │
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ stop    │ -p embed-certs-779570 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p disable-driver-mounts-557073                                                                                                                                                                                                               │ disable-driver-mounts-557073 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ image   │ embed-certs-779570 image list --format=json                                                                                                                                                                                                   │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-779570 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-661639 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-661639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-532612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:42:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:42:06.889711  488960 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:42:06.890338  488960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:06.890388  488960 out.go:374] Setting ErrFile to fd 2...
	I1009 19:42:06.890420  488960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:06.890788  488960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:42:06.891304  488960 out.go:368] Setting JSON to false
	I1009 19:42:06.892401  488960 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8678,"bootTime":1760030249,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:42:06.892499  488960 start.go:141] virtualization:  
	I1009 19:42:06.895657  488960 out.go:179] * [default-k8s-diff-port-661639] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:42:06.899603  488960 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:42:06.899676  488960 notify.go:220] Checking for updates...
	I1009 19:42:06.905790  488960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:42:06.908839  488960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:06.911689  488960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:42:06.914993  488960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:42:06.917894  488960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:42:06.921204  488960 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:06.921846  488960 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:42:06.967990  488960 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:42:06.968124  488960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:42:07.073715  488960 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:42:07.063288002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:42:07.073836  488960 docker.go:318] overlay module found
	I1009 19:42:07.077039  488960 out.go:179] * Using the docker driver based on existing profile
	I1009 19:42:07.079814  488960 start.go:305] selected driver: docker
	I1009 19:42:07.079847  488960 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-661639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:07.079960  488960 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:42:07.080871  488960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:42:07.179980  488960 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 19:42:07.161543441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:42:07.180312  488960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:42:07.180339  488960 cni.go:84] Creating CNI manager for ""
	I1009 19:42:07.180398  488960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:42:07.180435  488960 start.go:349] cluster config:
	{Name:default-k8s-diff-port-661639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:07.183534  488960 out.go:179] * Starting "default-k8s-diff-port-661639" primary control-plane node in "default-k8s-diff-port-661639" cluster
	I1009 19:42:07.186310  488960 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:42:07.189231  488960 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:42:07.192001  488960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:42:07.192054  488960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:42:07.192069  488960 cache.go:64] Caching tarball of preloaded images
	I1009 19:42:07.192168  488960 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:42:07.192183  488960 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:42:07.192308  488960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/config.json ...
	I1009 19:42:07.192544  488960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:42:07.213624  488960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:42:07.213650  488960 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:42:07.213664  488960 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:42:07.213689  488960 start.go:360] acquireMachinesLock for default-k8s-diff-port-661639: {Name:mka8a696df6af39c9f3000a80f8e3a303a040dc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:42:07.213750  488960 start.go:364] duration metric: took 34.757µs to acquireMachinesLock for "default-k8s-diff-port-661639"
	I1009 19:42:07.213775  488960 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:42:07.213784  488960 fix.go:54] fixHost starting: 
	I1009 19:42:07.214041  488960 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:42:07.231644  488960 fix.go:112] recreateIfNeeded on default-k8s-diff-port-661639: state=Stopped err=<nil>
	W1009 19:42:07.231682  488960 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:42:03.122196  486363 out.go:252]   - Generating certificates and keys ...
	I1009 19:42:03.122321  486363 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:42:03.122416  486363 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:42:03.630012  486363 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:42:04.434377  486363 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:42:04.655723  486363 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:42:04.829167  486363 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:42:05.421454  486363 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:42:05.422033  486363 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-532612] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:42:05.626014  486363 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:42:05.626175  486363 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-532612] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:42:05.722297  486363 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:42:06.773492  486363 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:42:07.878814  486363 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:42:07.878895  486363 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:42:08.522987  486363 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:42:08.742141  486363 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:42:09.294879  486363 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:42:10.270269  486363 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:42:11.050479  486363 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:42:11.051105  486363 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:42:11.054396  486363 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:42:07.234851  488960 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-661639" ...
	I1009 19:42:07.234946  488960 cli_runner.go:164] Run: docker start default-k8s-diff-port-661639
	I1009 19:42:07.516053  488960 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:42:07.537648  488960 kic.go:430] container "default-k8s-diff-port-661639" state is running.
	I1009 19:42:07.538017  488960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:42:07.567085  488960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/config.json ...
	I1009 19:42:07.567332  488960 machine.go:93] provisionDockerMachine start ...
	I1009 19:42:07.567403  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:07.589181  488960 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:07.589499  488960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1009 19:42:07.589508  488960 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:42:07.594251  488960 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41106->127.0.0.1:33460: read: connection reset by peer
	I1009 19:42:10.755434  488960 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661639
	
	I1009 19:42:10.755463  488960 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-661639"
	I1009 19:42:10.755561  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:10.784122  488960 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:10.784476  488960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1009 19:42:10.784489  488960 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-661639 && echo "default-k8s-diff-port-661639" | sudo tee /etc/hostname
	I1009 19:42:10.953053  488960 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-661639
	
	I1009 19:42:10.953125  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:10.976380  488960 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:10.976699  488960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1009 19:42:10.976721  488960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-661639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-661639/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-661639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:42:11.139034  488960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:42:11.139063  488960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:42:11.139083  488960 ubuntu.go:190] setting up certificates
	I1009 19:42:11.139094  488960 provision.go:84] configureAuth start
	I1009 19:42:11.139157  488960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:42:11.169109  488960 provision.go:143] copyHostCerts
	I1009 19:42:11.169174  488960 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:42:11.169188  488960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:42:11.169264  488960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:42:11.169367  488960 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:42:11.169376  488960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:42:11.169403  488960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:42:11.169452  488960 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:42:11.169457  488960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:42:11.169481  488960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:42:11.169528  488960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-661639 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-661639 localhost minikube]
	I1009 19:42:11.544552  488960 provision.go:177] copyRemoteCerts
	I1009 19:42:11.544621  488960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:42:11.544671  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:11.562940  488960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:42:11.675147  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:42:11.695307  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 19:42:11.714830  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:42:11.734661  488960 provision.go:87] duration metric: took 595.553467ms to configureAuth
	I1009 19:42:11.734700  488960 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:42:11.734900  488960 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:11.735036  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:11.754874  488960 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:11.755172  488960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1009 19:42:11.755190  488960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:42:11.058440  486363 out.go:252]   - Booting up control plane ...
	I1009 19:42:11.058568  486363 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:42:11.058655  486363 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:42:11.060151  486363 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:42:11.093522  486363 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:42:11.093641  486363 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:42:11.102067  486363 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:42:11.104786  486363 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:42:11.104844  486363 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:42:11.282908  486363 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:42:11.283034  486363 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:42:12.279488  486363 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00144656s
	I1009 19:42:12.288745  486363 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:42:12.288849  486363 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 19:42:12.288943  486363 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:42:12.289025  486363 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:42:12.097991  488960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:42:12.098072  488960 machine.go:96] duration metric: took 4.530730583s to provisionDockerMachine
	I1009 19:42:12.098100  488960 start.go:293] postStartSetup for "default-k8s-diff-port-661639" (driver="docker")
	I1009 19:42:12.098182  488960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:42:12.098280  488960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:42:12.098354  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:12.120097  488960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:42:12.223443  488960 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:42:12.229783  488960 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:42:12.229858  488960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:42:12.229885  488960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:42:12.229967  488960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:42:12.230073  488960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:42:12.230240  488960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:42:12.242562  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:42:12.273692  488960 start.go:296] duration metric: took 175.556747ms for postStartSetup
	I1009 19:42:12.273771  488960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:42:12.273817  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:12.295560  488960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:42:12.410690  488960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:42:12.416053  488960 fix.go:56] duration metric: took 5.202262278s for fixHost
	I1009 19:42:12.416080  488960 start.go:83] releasing machines lock for "default-k8s-diff-port-661639", held for 5.202317212s
	I1009 19:42:12.416149  488960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-661639
	I1009 19:42:12.442040  488960 ssh_runner.go:195] Run: cat /version.json
	I1009 19:42:12.442103  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:12.442390  488960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:42:12.442453  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:12.489191  488960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:42:12.492337  488960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:42:12.603222  488960 ssh_runner.go:195] Run: systemctl --version
	I1009 19:42:12.725057  488960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:42:12.779525  488960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:42:12.785691  488960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:42:12.785809  488960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:42:12.801475  488960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:42:12.801551  488960 start.go:495] detecting cgroup driver to use...
	I1009 19:42:12.801604  488960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:42:12.801677  488960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:42:12.826706  488960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:42:12.847859  488960 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:42:12.847984  488960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:42:12.875981  488960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:42:12.905229  488960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:42:13.119720  488960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:42:13.322670  488960 docker.go:234] disabling docker service ...
	I1009 19:42:13.322736  488960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:42:13.349185  488960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:42:13.363223  488960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:42:13.589443  488960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:42:13.782858  488960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:42:13.812338  488960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:42:13.843558  488960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:42:13.843624  488960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:13.860767  488960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:42:13.860836  488960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:13.869962  488960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:13.878868  488960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:13.887988  488960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:42:13.898257  488960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:13.912421  488960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:13.921099  488960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:13.930115  488960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:42:13.938121  488960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:42:13.946288  488960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:14.169849  488960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:42:14.399812  488960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:42:14.399906  488960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:42:14.403797  488960 start.go:563] Will wait 60s for crictl version
	I1009 19:42:14.403970  488960 ssh_runner.go:195] Run: which crictl
	I1009 19:42:14.414055  488960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:42:14.465620  488960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:42:14.465775  488960 ssh_runner.go:195] Run: crio --version
	I1009 19:42:14.518573  488960 ssh_runner.go:195] Run: crio --version
	I1009 19:42:14.565305  488960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:42:14.568159  488960 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-661639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:42:14.591924  488960 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:42:14.596157  488960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:42:14.608133  488960 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-661639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:42:14.608253  488960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:42:14.608306  488960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:42:14.676757  488960 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:42:14.676840  488960 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:42:14.676927  488960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:42:14.714404  488960 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:42:14.714428  488960 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:42:14.714438  488960 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1009 19:42:14.714541  488960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-661639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:42:14.714627  488960 ssh_runner.go:195] Run: crio config
	I1009 19:42:14.807808  488960 cni.go:84] Creating CNI manager for ""
	I1009 19:42:14.807890  488960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:42:14.807923  488960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:42:14.807986  488960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-661639 NodeName:default-k8s-diff-port-661639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:42:14.808152  488960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-661639"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:42:14.808258  488960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:42:14.820272  488960 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:42:14.820422  488960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:42:14.831331  488960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1009 19:42:14.846475  488960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:42:14.868457  488960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1009 19:42:14.891838  488960 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:42:14.898977  488960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:42:14.914856  488960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:15.132258  488960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:42:15.156055  488960 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639 for IP: 192.168.76.2
	I1009 19:42:15.156073  488960 certs.go:195] generating shared ca certs ...
	I1009 19:42:15.156109  488960 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:15.156240  488960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:42:15.156284  488960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:42:15.156292  488960 certs.go:257] generating profile certs ...
	I1009 19:42:15.156381  488960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.key
	I1009 19:42:15.156446  488960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key.6f8704fb
	I1009 19:42:15.156491  488960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key
	I1009 19:42:15.156599  488960 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:42:15.156629  488960 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:42:15.156637  488960 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:42:15.156661  488960 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:42:15.156681  488960 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:42:15.156705  488960 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:42:15.156748  488960 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:42:15.157662  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:42:15.200972  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:42:15.259642  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:42:15.299348  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:42:15.345324  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:42:15.387216  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:42:15.451608  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:42:15.488904  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:42:15.536707  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:42:15.584149  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:42:15.631304  488960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:42:15.677746  488960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:42:15.699664  488960 ssh_runner.go:195] Run: openssl version
	I1009 19:42:15.706904  488960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:42:15.733639  488960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:15.737552  488960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:15.737670  488960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:15.810323  488960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:42:15.818870  488960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:42:15.829336  488960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:42:15.836125  488960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:42:15.836240  488960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:42:15.881497  488960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:42:15.890429  488960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:42:15.900444  488960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:42:15.909725  488960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:42:15.909791  488960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:42:15.972475  488960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:42:15.984646  488960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:42:15.990929  488960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:42:16.070467  488960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:42:16.172705  488960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:42:16.306547  488960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:42:16.490589  488960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:42:16.733965  488960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:42:16.854955  488960 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-661639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-661639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:16.855139  488960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:42:16.855248  488960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:42:16.931408  488960 cri.go:89] found id: "61a4c1355e9141371be19936476667f45feaf5cb8cb543e4b20e6dca262e451c"
	I1009 19:42:16.931431  488960 cri.go:89] found id: "768ba8a2857f78e85519fbea6febfc8bd4969620ca951c7d260ada4b7c79e0d0"
	I1009 19:42:16.931436  488960 cri.go:89] found id: "fdf381ba0047cf30aba12aa77f6c2451060e006b9680d6c86f071cb8a93a48aa"
	I1009 19:42:16.931440  488960 cri.go:89] found id: "098c492e4b1b7624dacdb34909738100a576f8ba91c34e3d4554ab1dd15c385a"
	I1009 19:42:16.931443  488960 cri.go:89] found id: ""
	I1009 19:42:16.931491  488960 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:42:16.966074  488960 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:42:16Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:42:16.966161  488960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:42:16.988168  488960 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:42:16.988189  488960 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:42:16.988260  488960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:42:17.014812  488960 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:42:17.015264  488960 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-661639" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:17.015373  488960 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-661639" cluster setting kubeconfig missing "default-k8s-diff-port-661639" context setting]
	I1009 19:42:17.015661  488960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:17.017082  488960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:42:17.040036  488960 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 19:42:17.040069  488960 kubeadm.go:601] duration metric: took 51.873743ms to restartPrimaryControlPlane
	I1009 19:42:17.040110  488960 kubeadm.go:402] duration metric: took 185.134145ms to StartCluster
	I1009 19:42:17.040131  488960 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:17.040190  488960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:17.040777  488960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:17.040972  488960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:42:17.041210  488960 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:17.041254  488960 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:42:17.041327  488960 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-661639"
	I1009 19:42:17.041341  488960 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-661639"
	W1009 19:42:17.041350  488960 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:42:17.041365  488960 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-661639"
	I1009 19:42:17.041376  488960 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-661639"
	I1009 19:42:17.041386  488960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-661639"
	I1009 19:42:17.041388  488960 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-661639"
	W1009 19:42:17.041395  488960 addons.go:247] addon dashboard should already be in state true
	I1009 19:42:17.041426  488960 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:42:17.041673  488960 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:42:17.041845  488960 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:42:17.041370  488960 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:42:17.042600  488960 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:42:17.045128  488960 out.go:179] * Verifying Kubernetes components...
	I1009 19:42:17.054269  488960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:17.095283  488960 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:42:17.098655  488960 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:42:17.100306  488960 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-661639"
	W1009 19:42:17.100327  488960 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:42:17.100352  488960 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:42:17.100759  488960 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:42:17.108742  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:42:17.108766  488960 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:42:17.108838  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:17.122210  488960 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:42:18.158489  486363 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.873734101s
	I1009 19:42:20.328017  486363 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.043077617s
	I1009 19:42:21.287521  486363 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.001240684s
	I1009 19:42:21.306353  486363 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:42:21.323128  486363 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:42:21.340269  486363 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:42:21.340712  486363 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-532612 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:42:21.359084  486363 kubeadm.go:318] [bootstrap-token] Using token: pk19z4.4w2tkdwd8ghctgbt
	I1009 19:42:17.125155  488960 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:42:17.125179  488960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:42:17.125252  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:17.152544  488960 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:42:17.152564  488960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:42:17.152659  488960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:42:17.162226  488960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:42:17.186671  488960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:42:17.204137  488960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:42:17.597205  488960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:42:17.625707  488960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:42:17.643727  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:42:17.643754  488960 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:42:17.685213  488960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:42:17.723498  488960 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-661639" to be "Ready" ...
	I1009 19:42:17.808263  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:42:17.808304  488960 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:42:17.928265  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:42:17.928297  488960 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:42:18.079806  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:42:18.079851  488960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:42:18.143684  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:42:18.143709  488960 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:42:18.192251  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:42:18.192289  488960 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:42:18.213006  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:42:18.213034  488960 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:42:18.241904  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:42:18.241940  488960 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:42:18.286787  488960 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:42:18.286814  488960 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:42:18.310822  488960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:42:21.362166  486363 out.go:252]   - Configuring RBAC rules ...
	I1009 19:42:21.362296  486363 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:42:21.380069  486363 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:42:21.404452  486363 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:42:21.414087  486363 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:42:21.423467  486363 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:42:21.429413  486363 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:42:21.706899  486363 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:42:22.279279  486363 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:42:22.696731  486363 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:42:22.698302  486363 kubeadm.go:318] 
	I1009 19:42:22.698387  486363 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:42:22.698393  486363 kubeadm.go:318] 
	I1009 19:42:22.698473  486363 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:42:22.698478  486363 kubeadm.go:318] 
	I1009 19:42:22.698504  486363 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:42:22.699031  486363 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:42:22.699097  486363 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:42:22.699103  486363 kubeadm.go:318] 
	I1009 19:42:22.699159  486363 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:42:22.699164  486363 kubeadm.go:318] 
	I1009 19:42:22.699213  486363 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:42:22.699217  486363 kubeadm.go:318] 
	I1009 19:42:22.699271  486363 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:42:22.699348  486363 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:42:22.699419  486363 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:42:22.699423  486363 kubeadm.go:318] 
	I1009 19:42:22.699722  486363 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:42:22.699807  486363 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:42:22.699812  486363 kubeadm.go:318] 
	I1009 19:42:22.700107  486363 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pk19z4.4w2tkdwd8ghctgbt \
	I1009 19:42:22.700220  486363 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 \
	I1009 19:42:22.700428  486363 kubeadm.go:318] 	--control-plane 
	I1009 19:42:22.700438  486363 kubeadm.go:318] 
	I1009 19:42:22.700714  486363 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:42:22.700723  486363 kubeadm.go:318] 
	I1009 19:42:22.701003  486363 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pk19z4.4w2tkdwd8ghctgbt \
	I1009 19:42:22.701285  486363 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a98ca178ab4f20ea53d129cec61d8a495c31f2bd848fc883795202766e69bfa1 
	I1009 19:42:22.711629  486363 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:42:22.711876  486363 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:42:22.711986  486363 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
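	[editor's note] The join commands printed above carry a --discovery-token-ca-cert-hash value, which kubeadm derives as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch of that derivation; this is an illustration, and the certificate path is an assumption:

    // cahash.go - sketch of computing the discovery-token-ca-cert-hash shown in the join command.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }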
	I1009 19:42:22.712003  486363 cni.go:84] Creating CNI manager for ""
	I1009 19:42:22.712010  486363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:42:22.715619  486363 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:42:24.028085  488960 node_ready.go:49] node "default-k8s-diff-port-661639" is "Ready"
	I1009 19:42:24.028111  488960 node_ready.go:38] duration metric: took 6.304574383s for node "default-k8s-diff-port-661639" to be "Ready" ...
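	[editor's note] node_ready.go polls the node object until its Ready condition turns true, as it just did after roughly six seconds. A condensed client-go sketch of such a wait; this is an illustration that assumes a kubeconfig path and the k8s.io/client-go module, neither of which is taken from the log:

    // nodewait.go - sketch of waiting for a node's Ready condition with client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            if ready, err := nodeReady(ctx, cs, "default-k8s-diff-port-661639"); err == nil && ready {
                fmt.Println("node is Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for node Ready")
            case <-time.After(2 * time.Second):
            }
        }
    }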
	I1009 19:42:24.028125  488960 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:42:24.028188  488960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:42:24.211835  488960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.586080283s)
	I1009 19:42:25.937802  488960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.252551508s)
	I1009 19:42:25.937931  488960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.627068601s)
	I1009 19:42:25.938105  488960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.909883701s)
	I1009 19:42:25.938175  488960 api_server.go:72] duration metric: took 8.897121743s to wait for apiserver process to appear ...
	I1009 19:42:25.938186  488960 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:42:25.938204  488960 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1009 19:42:25.940682  488960 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-661639 addons enable metrics-server
	
	I1009 19:42:25.943786  488960 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1009 19:42:25.946630  488960 addons.go:514] duration metric: took 8.905355484s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1009 19:42:25.949241  488960 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:42:25.949265  488960 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:42:26.438918  488960 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1009 19:42:26.449073  488960 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1009 19:42:26.450234  488960 api_server.go:141] control plane version: v1.34.1
	I1009 19:42:26.450266  488960 api_server.go:131] duration metric: took 512.073309ms to wait for apiserver health ...
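	[editor's note] api_server.go keeps hitting /healthz until the 500 responses (rbac/bootstrap-roles still pending) give way to a 200 "ok", as seen above. A small Go sketch of that polling loop; this is an illustration, the URL is copied from the log, and the insecure TLS client is an assumption for brevity (minikube itself authenticates with the cluster's client certificates):

    // healthzpoll.go - sketch of polling the apiserver /healthz endpoint until it reports ok.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reported "ok"
                }
                // A 500 with a failed poststarthook is expected briefly after restart; keep polling.
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver healthz did not return 200 within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8444/healthz", 4*time.Minute); err != nil {
            panic(err)
        }
    }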
	I1009 19:42:26.450275  488960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:42:26.453503  488960 system_pods.go:59] 8 kube-system pods found
	I1009 19:42:26.453546  488960 system_pods.go:61] "coredns-66bc5c9577-xmz2b" [f4f45d1d-93b4-496e-9086-b11b78d81810] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:42:26.453558  488960 system_pods.go:61] "etcd-default-k8s-diff-port-661639" [225b314c-ea9f-40db-8b14-62a6c056e633] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:42:26.453564  488960 system_pods.go:61] "kindnet-29w5k" [e71ef6ee-34c7-49c9-ae9f-439bc2897f22] Running
	I1009 19:42:26.453571  488960 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-661639" [45e2389d-8003-436c-b13c-26caa975813a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:42:26.453578  488960 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-661639" [c58ee1e3-d00c-4fef-a060-e81d6bd74107] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:42:26.453590  488960 system_pods.go:61] "kube-proxy-8nqdl" [10c09c57-34e7-4872-b609-9660c2a3777a] Running
	I1009 19:42:26.453597  488960 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-661639" [be1f2919-d0ea-4bfa-87fe-2bb06132a5a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:42:26.453606  488960 system_pods.go:61] "storage-provisioner" [f1ab3395-bfab-42a6-b507-b132a04dfe14] Running
	I1009 19:42:26.453612  488960 system_pods.go:74] duration metric: took 3.330961ms to wait for pod list to return data ...
	I1009 19:42:26.453620  488960 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:42:26.456158  488960 default_sa.go:45] found service account: "default"
	I1009 19:42:26.456185  488960 default_sa.go:55] duration metric: took 2.55542ms for default service account to be created ...
	I1009 19:42:26.456196  488960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:42:26.459033  488960 system_pods.go:86] 8 kube-system pods found
	I1009 19:42:26.459066  488960 system_pods.go:89] "coredns-66bc5c9577-xmz2b" [f4f45d1d-93b4-496e-9086-b11b78d81810] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:42:26.459077  488960 system_pods.go:89] "etcd-default-k8s-diff-port-661639" [225b314c-ea9f-40db-8b14-62a6c056e633] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:42:26.459082  488960 system_pods.go:89] "kindnet-29w5k" [e71ef6ee-34c7-49c9-ae9f-439bc2897f22] Running
	I1009 19:42:26.459089  488960 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-661639" [45e2389d-8003-436c-b13c-26caa975813a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:42:26.459095  488960 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-661639" [c58ee1e3-d00c-4fef-a060-e81d6bd74107] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:42:26.459101  488960 system_pods.go:89] "kube-proxy-8nqdl" [10c09c57-34e7-4872-b609-9660c2a3777a] Running
	I1009 19:42:26.459112  488960 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-661639" [be1f2919-d0ea-4bfa-87fe-2bb06132a5a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:42:26.459119  488960 system_pods.go:89] "storage-provisioner" [f1ab3395-bfab-42a6-b507-b132a04dfe14] Running
	I1009 19:42:26.459127  488960 system_pods.go:126] duration metric: took 2.925771ms to wait for k8s-apps to be running ...
	I1009 19:42:26.459140  488960 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:42:26.459196  488960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:42:26.473226  488960 system_svc.go:56] duration metric: took 14.07766ms WaitForService to wait for kubelet
	I1009 19:42:26.473308  488960 kubeadm.go:586] duration metric: took 9.432303247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:42:26.473341  488960 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:42:26.477147  488960 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:42:26.477220  488960 node_conditions.go:123] node cpu capacity is 2
	I1009 19:42:26.477248  488960 node_conditions.go:105] duration metric: took 3.87391ms to run NodePressure ...
	I1009 19:42:26.477277  488960 start.go:241] waiting for startup goroutines ...
	I1009 19:42:26.477311  488960 start.go:246] waiting for cluster config update ...
	I1009 19:42:26.477342  488960 start.go:255] writing updated cluster config ...
	I1009 19:42:26.477667  488960 ssh_runner.go:195] Run: rm -f paused
	I1009 19:42:26.482840  488960 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:42:26.488717  488960 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xmz2b" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:42:22.718526  486363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:42:22.726747  486363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:42:22.726765  486363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:42:22.782160  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:42:23.343770  486363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:42:23.343922  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:23.343990  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-532612 minikube.k8s.io/updated_at=2025_10_09T19_42_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50 minikube.k8s.io/name=newest-cni-532612 minikube.k8s.io/primary=true
	I1009 19:42:23.777493  486363 ops.go:34] apiserver oom_adj: -16
	I1009 19:42:23.777603  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:24.278452  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:24.777902  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:25.278586  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:25.778101  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:26.278080  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:26.778253  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:27.278422  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:27.777646  486363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:42:27.913410  486363 kubeadm.go:1113] duration metric: took 4.569532638s to wait for elevateKubeSystemPrivileges
	I1009 19:42:27.913456  486363 kubeadm.go:402] duration metric: took 25.072834138s to StartCluster
	I1009 19:42:27.913474  486363 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:27.913539  486363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:27.914561  486363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:27.914779  486363 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:42:27.914871  486363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:42:27.915126  486363 config.go:182] Loaded profile config "newest-cni-532612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:27.915162  486363 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:42:27.915220  486363 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-532612"
	I1009 19:42:27.915238  486363 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-532612"
	I1009 19:42:27.915261  486363 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:42:27.915750  486363 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:27.916359  486363 addons.go:69] Setting default-storageclass=true in profile "newest-cni-532612"
	I1009 19:42:27.916379  486363 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-532612"
	I1009 19:42:27.916671  486363 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:27.918198  486363 out.go:179] * Verifying Kubernetes components...
	I1009 19:42:27.923337  486363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:27.956500  486363 addons.go:238] Setting addon default-storageclass=true in "newest-cni-532612"
	I1009 19:42:27.956542  486363 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:42:27.956940  486363 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:27.984836  486363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:42:27.987725  486363 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:42:27.987748  486363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:42:27.987825  486363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:28.002504  486363 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:42:28.002543  486363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:42:28.002615  486363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:28.034269  486363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:28.037661  486363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:28.315520  486363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:42:28.315897  486363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:42:28.357997  486363 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:42:28.358155  486363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:42:28.411558  486363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:42:28.508174  486363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:42:29.139355  486363 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1009 19:42:29.140656  486363 api_server.go:72] duration metric: took 1.225842766s to wait for apiserver process to appear ...
	I1009 19:42:29.140683  486363 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:42:29.140697  486363 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:29.179002  486363 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:42:29.184343  486363 api_server.go:141] control plane version: v1.34.1
	I1009 19:42:29.184374  486363 api_server.go:131] duration metric: took 43.682819ms to wait for apiserver health ...
	I1009 19:42:29.184383  486363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:42:29.196340  486363 system_pods.go:59] 8 kube-system pods found
	I1009 19:42:29.196403  486363 system_pods.go:61] "coredns-66bc5c9577-b6x86" [1d407787-681c-46a5-a196-d6dfb8906b33] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 19:42:29.196418  486363 system_pods.go:61] "coredns-66bc5c9577-ptcc6" [9cb17d4b-1710-4794-919a-92018b128d23] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 19:42:29.196426  486363 system_pods.go:61] "etcd-newest-cni-532612" [5fa83761-6c4f-4748-be0e-55c99a748e7c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:42:29.196432  486363 system_pods.go:61] "kindnet-l62gf" [1dff8975-257b-409c-85f7-7f11e9444ec0] Running
	I1009 19:42:29.196438  486363 system_pods.go:61] "kube-apiserver-newest-cni-532612" [26cb7bbd-ad4d-4bbf-a096-35c75aeb359c] Running
	I1009 19:42:29.196443  486363 system_pods.go:61] "kube-controller-manager-newest-cni-532612" [0e361d42-6133-4366-b817-141687d94c94] Running
	I1009 19:42:29.196452  486363 system_pods.go:61] "kube-proxy-bsq7j" [3415e29c-3f95-48f5-977e-ab18e00181ab] Running
	I1009 19:42:29.196458  486363 system_pods.go:61] "kube-scheduler-newest-cni-532612" [06cc763d-090b-497c-a0ce-b6276f27ed63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:42:29.196467  486363 system_pods.go:74] duration metric: took 12.078448ms to wait for pod list to return data ...
	I1009 19:42:29.196482  486363 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:42:29.203058  486363 default_sa.go:45] found service account: "default"
	I1009 19:42:29.203087  486363 default_sa.go:55] duration metric: took 6.597775ms for default service account to be created ...
	I1009 19:42:29.203101  486363 kubeadm.go:586] duration metric: took 1.288289538s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 19:42:29.203117  486363 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:42:29.205440  486363 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:42:29.205474  486363 node_conditions.go:123] node cpu capacity is 2
	I1009 19:42:29.205487  486363 node_conditions.go:105] duration metric: took 2.363819ms to run NodePressure ...
	I1009 19:42:29.205499  486363 start.go:241] waiting for startup goroutines ...
	I1009 19:42:29.449875  486363 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1009 19:42:29.452911  486363 addons.go:514] duration metric: took 1.537726755s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 19:42:29.644397  486363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-532612" context rescaled to 1 replicas
	I1009 19:42:29.644498  486363 start.go:246] waiting for cluster config update ...
	I1009 19:42:29.644562  486363 start.go:255] writing updated cluster config ...
	I1009 19:42:29.644981  486363 ssh_runner.go:195] Run: rm -f paused
	I1009 19:42:29.748395  486363 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:42:29.751675  486363 out.go:179] * Done! kubectl is now configured to use "newest-cni-532612" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.536730708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.542828029Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3d1e81ec-3b81-4fc9-8342-ac214448bd97 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.551929046Z" level=info msg="Ran pod sandbox 9164d137dadd6e04cee8926c9079b6a3785eaaf191d834dd556057697cebfbc7 with infra container: kube-system/kube-proxy-bsq7j/POD" id=3d1e81ec-3b81-4fc9-8342-ac214448bd97 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.556750889Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=74f692b4-6591-4387-be3f-cc3b0a60847b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.56697279Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a94f02b5-06de-479e-a702-6329a9b5da58 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.569777427Z" level=info msg="Running pod sandbox: kube-system/kindnet-l62gf/POD" id=e8ac2481-3664-4cc7-b683-a3463e8c9124 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.569841608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.577529644Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e8ac2481-3664-4cc7-b683-a3463e8c9124 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.585378289Z" level=info msg="Ran pod sandbox b606b14b88c43f271eaf5a5d440e37c18d03b915b100d43fd836e3ed81be2516 with infra container: kube-system/kindnet-l62gf/POD" id=e8ac2481-3664-4cc7-b683-a3463e8c9124 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.587339034Z" level=info msg="Creating container: kube-system/kube-proxy-bsq7j/kube-proxy" id=98b1c4c1-327b-45b9-a222-679716ef04d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.587619686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.589705718Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2701f3ae-2ec6-4bc3-ac78-6a7f9cd46148 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.597028998Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=89ca6137-c0e5-45ca-8224-bd0125847f55 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.604052148Z" level=info msg="Creating container: kube-system/kindnet-l62gf/kindnet-cni" id=55c74ff6-52fb-4cbd-a771-00e292132522 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.604069125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.604714441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.607680852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.609682386Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.610562725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.634920994Z" level=info msg="Created container df8d4bddd3fd0f06e435294ae19efea0d7c79b4c3f263a73d099efe028d82cc5: kube-system/kindnet-l62gf/kindnet-cni" id=55c74ff6-52fb-4cbd-a771-00e292132522 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.6380201Z" level=info msg="Starting container: df8d4bddd3fd0f06e435294ae19efea0d7c79b4c3f263a73d099efe028d82cc5" id=699f60c7-38e8-4a9f-9e2f-006eaa9c2e5a name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.64048587Z" level=info msg="Started container" PID=1417 containerID=df8d4bddd3fd0f06e435294ae19efea0d7c79b4c3f263a73d099efe028d82cc5 description=kube-system/kindnet-l62gf/kindnet-cni id=699f60c7-38e8-4a9f-9e2f-006eaa9c2e5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b606b14b88c43f271eaf5a5d440e37c18d03b915b100d43fd836e3ed81be2516
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.654226607Z" level=info msg="Created container 085d1f00080e1997b2f604952f3f6ba694a0c97cd2de34fc6277a2a68cc8789b: kube-system/kube-proxy-bsq7j/kube-proxy" id=98b1c4c1-327b-45b9-a222-679716ef04d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.655107692Z" level=info msg="Starting container: 085d1f00080e1997b2f604952f3f6ba694a0c97cd2de34fc6277a2a68cc8789b" id=0a7653ac-a9f1-414e-9680-b307f52630c0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:42:27 newest-cni-532612 crio[835]: time="2025-10-09T19:42:27.658381693Z" level=info msg="Started container" PID=1418 containerID=085d1f00080e1997b2f604952f3f6ba694a0c97cd2de34fc6277a2a68cc8789b description=kube-system/kube-proxy-bsq7j/kube-proxy id=0a7653ac-a9f1-414e-9680-b307f52630c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9164d137dadd6e04cee8926c9079b6a3785eaaf191d834dd556057697cebfbc7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	df8d4bddd3fd0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               0                   b606b14b88c43       kindnet-l62gf                               kube-system
	085d1f00080e1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                0                   9164d137dadd6       kube-proxy-bsq7j                            kube-system
	9d0eadd6e625f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago      Running             kube-controller-manager   0                   6d541cb9407e7       kube-controller-manager-newest-cni-532612   kube-system
	a777f9e10c119       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   19 seconds ago      Running             kube-scheduler            0                   35cdb0ba5be5e       kube-scheduler-newest-cni-532612            kube-system
	3d7d268e5fe5b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago      Running             etcd                      0                   bf3cb472230fc       etcd-newest-cni-532612                      kube-system
	e409528c2480c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   19 seconds ago      Running             kube-apiserver            0                   5603a4040c4b1       kube-apiserver-newest-cni-532612            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-532612
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-532612
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=newest-cni-532612
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_42_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:42:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-532612
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:42:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:42:22 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:42:22 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:42:22 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 09 Oct 2025 19:42:22 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-532612
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c061dd97d9034e51a0158833c99de144
	  System UUID:                599339a9-1ab0-448e-9b04-25350ae8a3fc
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-532612                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-l62gf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-532612             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-532612    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-bsq7j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-532612             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node newest-cni-532612 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x8 over 20s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-532612 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-532612 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-532612 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-532612 event: Registered Node newest-cni-532612 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:13] overlayfs: idmapped layers are currently not supported
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[  +9.958825] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:42] overlayfs: idmapped layers are currently not supported
	[  +3.815530] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3d7d268e5fe5bdb56dc623f9fcb752ca1f0f75238ab634c3d14f499759c65f5d] <==
	{"level":"warn","ts":"2025-10-09T19:42:15.218857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.295594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.391310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.446897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.550726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.613495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.676523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.784522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.785027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.836744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.898625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.937691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:15.969946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.032144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.066999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.105843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.158979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.227807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.265466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.313057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.371213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.402992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.482653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.578217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:16.753509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57014","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:31 up  2:25,  0 user,  load average: 4.30, 3.33, 2.56
	Linux newest-cni-532612 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [df8d4bddd3fd0f06e435294ae19efea0d7c79b4c3f263a73d099efe028d82cc5] <==
	I1009 19:42:27.815333       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:42:27.815837       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:42:27.816037       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:42:27.816092       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:42:27.816128       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:42:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:42:28.105092       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:42:28.108983       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:42:28.109864       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:42:28.116096       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e409528c2480ccbfbfba8a97f7b38a77ae30fea4eb4818d99287cb98267ead53] <==
	I1009 19:42:19.002251       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:42:19.002323       1 policy_source.go:240] refreshing policies
	I1009 19:42:19.005060       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:42:19.065570       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:42:19.066654       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 19:42:19.109392       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:42:19.120252       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:42:19.125323       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:42:19.707293       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 19:42:19.715348       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 19:42:19.715367       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:42:20.949441       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:42:21.027261       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:42:21.097329       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:42:21.105132       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1009 19:42:21.106362       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:42:21.111683       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:42:21.719635       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:42:22.236860       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:42:22.273962       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:42:22.312253       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 19:42:26.854116       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:42:27.203004       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1009 19:42:27.869131       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:42:27.877410       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [9d0eadd6e625f99e98063c2b1e8de0b01d86c99a2ace89cdb7509f831b31c18c] <==
	I1009 19:42:26.712619       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:42:26.715821       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:42:26.716995       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 19:42:26.719245       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:42:26.721460       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 19:42:26.741684       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:42:26.741976       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:42:26.744330       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 19:42:26.744431       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:42:26.744514       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-532612"
	I1009 19:42:26.744563       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 19:42:26.745543       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:42:26.746664       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 19:42:26.747016       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 19:42:26.747814       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:42:26.748960       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:42:26.749256       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:42:26.749503       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 19:42:26.751551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:42:26.752424       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:42:26.752464       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 19:42:26.755780       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:42:26.766704       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:42:26.766733       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:42:26.766741       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [085d1f00080e1997b2f604952f3f6ba694a0c97cd2de34fc6277a2a68cc8789b] <==
	I1009 19:42:27.737877       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:42:27.848645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:42:27.958523       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:42:27.988097       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:42:28.050788       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:42:28.299497       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:42:28.299562       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:42:28.318616       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:42:28.318964       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:42:28.318983       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:28.330786       1 config.go:200] "Starting service config controller"
	I1009 19:42:28.330803       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:42:28.330819       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:42:28.330824       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:42:28.330846       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:42:28.330855       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:42:28.331471       1 config.go:309] "Starting node config controller"
	I1009 19:42:28.331490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:42:28.331497       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:42:28.431157       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:42:28.431225       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 19:42:28.431465       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a777f9e10c11966c43923a4aad7630ce3be06aa1f0433922e8a7236ae4889e9b] <==
	I1009 19:42:20.298203       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:20.300978       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:42:20.301363       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:20.314234       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:20.301382       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1009 19:42:20.316163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 19:42:20.342004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:42:20.364584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 19:42:20.365028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 19:42:20.365145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:42:20.365304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 19:42:20.365352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 19:42:20.365395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:42:20.365436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:42:20.365482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:42:20.365530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:42:20.365582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:42:20.365629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 19:42:20.365683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 19:42:20.365719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:42:20.366064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:42:20.374732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:42:20.374665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:42:20.374863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1009 19:42:21.414627       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:42:22 newest-cni-532612 kubelet[1306]: I1009 19:42:22.888933    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cc6a46714b2ac49fa71197bd11d48be-ca-certs\") pod \"kube-controller-manager-newest-cni-532612\" (UID: \"1cc6a46714b2ac49fa71197bd11d48be\") " pod="kube-system/kube-controller-manager-newest-cni-532612"
	Oct 09 19:42:22 newest-cni-532612 kubelet[1306]: I1009 19:42:22.888951    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1cc6a46714b2ac49fa71197bd11d48be-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-532612\" (UID: \"1cc6a46714b2ac49fa71197bd11d48be\") " pod="kube-system/kube-controller-manager-newest-cni-532612"
	Oct 09 19:42:22 newest-cni-532612 kubelet[1306]: I1009 19:42:22.888969    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8175afbe34b45f10ba4cc064b77b567e-ca-certs\") pod \"kube-apiserver-newest-cni-532612\" (UID: \"8175afbe34b45f10ba4cc064b77b567e\") " pod="kube-system/kube-apiserver-newest-cni-532612"
	Oct 09 19:42:22 newest-cni-532612 kubelet[1306]: I1009 19:42:22.888986    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cc6a46714b2ac49fa71197bd11d48be-k8s-certs\") pod \"kube-controller-manager-newest-cni-532612\" (UID: \"1cc6a46714b2ac49fa71197bd11d48be\") " pod="kube-system/kube-controller-manager-newest-cni-532612"
	Oct 09 19:42:22 newest-cni-532612 kubelet[1306]: I1009 19:42:22.923478    1306 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-532612"
	Oct 09 19:42:22 newest-cni-532612 kubelet[1306]: I1009 19:42:22.923607    1306 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-532612"
	Oct 09 19:42:23 newest-cni-532612 kubelet[1306]: I1009 19:42:23.436393    1306 apiserver.go:52] "Watching apiserver"
	Oct 09 19:42:23 newest-cni-532612 kubelet[1306]: I1009 19:42:23.484475    1306 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 19:42:23 newest-cni-532612 kubelet[1306]: I1009 19:42:23.734029    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-532612" podStartSLOduration=1.734008726 podStartE2EDuration="1.734008726s" podCreationTimestamp="2025-10-09 19:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:42:23.706841357 +0000 UTC m=+1.530288774" watchObservedRunningTime="2025-10-09 19:42:23.734008726 +0000 UTC m=+1.557456152"
	Oct 09 19:42:23 newest-cni-532612 kubelet[1306]: I1009 19:42:23.756489    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-532612" podStartSLOduration=1.756469109 podStartE2EDuration="1.756469109s" podCreationTimestamp="2025-10-09 19:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:42:23.735031392 +0000 UTC m=+1.558478818" watchObservedRunningTime="2025-10-09 19:42:23.756469109 +0000 UTC m=+1.579916535"
	Oct 09 19:42:23 newest-cni-532612 kubelet[1306]: I1009 19:42:23.809453    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-532612" podStartSLOduration=1.809434887 podStartE2EDuration="1.809434887s" podCreationTimestamp="2025-10-09 19:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:42:23.757470606 +0000 UTC m=+1.580918032" watchObservedRunningTime="2025-10-09 19:42:23.809434887 +0000 UTC m=+1.632882304"
	Oct 09 19:42:23 newest-cni-532612 kubelet[1306]: I1009 19:42:23.812287    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-532612" podStartSLOduration=1.812269744 podStartE2EDuration="1.812269744s" podCreationTimestamp="2025-10-09 19:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:42:23.809238265 +0000 UTC m=+1.632685691" watchObservedRunningTime="2025-10-09 19:42:23.812269744 +0000 UTC m=+1.635717194"
	Oct 09 19:42:26 newest-cni-532612 kubelet[1306]: I1009 19:42:26.791673    1306 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 09 19:42:26 newest-cni-532612 kubelet[1306]: I1009 19:42:26.792891    1306 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.339954    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3415e29c-3f95-48f5-977e-ab18e00181ab-kube-proxy\") pod \"kube-proxy-bsq7j\" (UID: \"3415e29c-3f95-48f5-977e-ab18e00181ab\") " pod="kube-system/kube-proxy-bsq7j"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.340005    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3415e29c-3f95-48f5-977e-ab18e00181ab-lib-modules\") pod \"kube-proxy-bsq7j\" (UID: \"3415e29c-3f95-48f5-977e-ab18e00181ab\") " pod="kube-system/kube-proxy-bsq7j"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.340026    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-lib-modules\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.340054    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzt48\" (UniqueName: \"kubernetes.io/projected/1dff8975-257b-409c-85f7-7f11e9444ec0-kube-api-access-xzt48\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.340079    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724ds\" (UniqueName: \"kubernetes.io/projected/3415e29c-3f95-48f5-977e-ab18e00181ab-kube-api-access-724ds\") pod \"kube-proxy-bsq7j\" (UID: \"3415e29c-3f95-48f5-977e-ab18e00181ab\") " pod="kube-system/kube-proxy-bsq7j"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.340096    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-cni-cfg\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.340112    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-xtables-lock\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.340133    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3415e29c-3f95-48f5-977e-ab18e00181ab-xtables-lock\") pod \"kube-proxy-bsq7j\" (UID: \"3415e29c-3f95-48f5-977e-ab18e00181ab\") " pod="kube-system/kube-proxy-bsq7j"
	Oct 09 19:42:27 newest-cni-532612 kubelet[1306]: I1009 19:42:27.458819    1306 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:42:28 newest-cni-532612 kubelet[1306]: I1009 19:42:28.719579    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-l62gf" podStartSLOduration=1.719544159 podStartE2EDuration="1.719544159s" podCreationTimestamp="2025-10-09 19:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:42:27.687087411 +0000 UTC m=+5.510534837" watchObservedRunningTime="2025-10-09 19:42:28.719544159 +0000 UTC m=+6.542991585"
	Oct 09 19:42:31 newest-cni-532612 kubelet[1306]: I1009 19:42:31.203068    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bsq7j" podStartSLOduration=4.20304854 podStartE2EDuration="4.20304854s" podCreationTimestamp="2025-10-09 19:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:42:28.721098442 +0000 UTC m=+6.544545885" watchObservedRunningTime="2025-10-09 19:42:31.20304854 +0000 UTC m=+9.026495983"
	

                                                
                                                
-- /stdout --
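The describe output above shows the node stuck NotReady only because kubelet reports "container runtime network not ready": there is no CNI configuration file in /etc/cni/net.d/ yet, presumably because kindnet had not finished writing its config at that point. A minimal client-go sketch for pulling that Ready condition and its reason out of the node object; the kubeconfig path and node name are taken from this run, and the snippet is illustrative, not part of the test suite:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config); assumed for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-532612", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// For the node described above this prints Status=False with the
			// NetworkPluginNotReady reason and the "no CNI configuration file" message.
			fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}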
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-532612 -n newest-cni-532612
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-532612 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ptcc6 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner: exit status 1 (135.724134ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ptcc6" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.41s)
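The post-mortem above amounts to two kubectl invocations: list the names of pods whose phase is not Running, then describe them, which exits 1 here only because coredns-66bc5c9577-ptcc6 and storage-provisioner no longer existed by the time describe ran. A rough Go sketch of that sequence, assuming kubectl is on PATH and using the context name from this run; it illustrates the check, it is not the helpers_test.go implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctxName := "newest-cni-532612" // taken from the log above

	// Names of pods not in phase Running (mirrors the query at helpers_test.go:269).
	out, err := exec.Command("kubectl", "--context", ctxName, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		panic(err)
	}
	pods := strings.Fields(string(out))
	fmt.Println("non-running pods:", pods)

	// Describe them; a NotFound error here just means the pods are already gone.
	args := append([]string{"--context", ctxName, "describe", "pod"}, pods...)
	desc, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Println(string(desc))
	if err != nil {
		fmt.Println("describe failed:", err)
	}
}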

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-532612 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-532612 --alsologtostderr -v=1: exit status 80 (2.312712965s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-532612 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:43:02.311724  494251 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:43:02.311913  494251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:02.311925  494251 out.go:374] Setting ErrFile to fd 2...
	I1009 19:43:02.311931  494251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:02.312251  494251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:43:02.312586  494251 out.go:368] Setting JSON to false
	I1009 19:43:02.312606  494251 mustload.go:65] Loading cluster: newest-cni-532612
	I1009 19:43:02.313085  494251 config.go:182] Loaded profile config "newest-cni-532612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:43:02.313597  494251 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:43:02.335180  494251 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:43:02.335605  494251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:43:02.398715  494251 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:43:02.388211005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:43:02.399367  494251 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-532612 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 19:43:02.405136  494251 out.go:179] * Pausing node newest-cni-532612 ... 
	I1009 19:43:02.408336  494251 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:43:02.408696  494251 ssh_runner.go:195] Run: systemctl --version
	I1009 19:43:02.408749  494251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:43:02.430797  494251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:43:02.541070  494251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:43:02.554980  494251 pause.go:52] kubelet running: true
	I1009 19:43:02.555056  494251 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:43:02.825426  494251 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:43:02.825545  494251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:43:02.904057  494251 cri.go:89] found id: "bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856"
	I1009 19:43:02.904079  494251 cri.go:89] found id: "489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033"
	I1009 19:43:02.904086  494251 cri.go:89] found id: "9d1c411171d0c75660af47fb4916909e0a344da9c5b1ab9af1308b477612ba13"
	I1009 19:43:02.904090  494251 cri.go:89] found id: "07dc64a3c26fed0535c468419ca5c44104c3d30d68ebfb1c21ea7919703acf23"
	I1009 19:43:02.904094  494251 cri.go:89] found id: "9b7ab3972f70411906d2737adfb5a6be317ef4ac4e38127df45ba42ee748fb65"
	I1009 19:43:02.904136  494251 cri.go:89] found id: "07f42cc6b9c838cf62593bb8dbf355567bcb13a93ff3b637e6424beb09826678"
	I1009 19:43:02.904148  494251 cri.go:89] found id: ""
	I1009 19:43:02.904217  494251 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:43:02.924949  494251 retry.go:31] will retry after 136.091868ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:02Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:43:03.061288  494251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:43:03.086470  494251 pause.go:52] kubelet running: false
	I1009 19:43:03.086579  494251 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:43:03.264509  494251 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:43:03.264626  494251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:43:03.338026  494251 cri.go:89] found id: "bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856"
	I1009 19:43:03.338051  494251 cri.go:89] found id: "489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033"
	I1009 19:43:03.338057  494251 cri.go:89] found id: "9d1c411171d0c75660af47fb4916909e0a344da9c5b1ab9af1308b477612ba13"
	I1009 19:43:03.338061  494251 cri.go:89] found id: "07dc64a3c26fed0535c468419ca5c44104c3d30d68ebfb1c21ea7919703acf23"
	I1009 19:43:03.338065  494251 cri.go:89] found id: "9b7ab3972f70411906d2737adfb5a6be317ef4ac4e38127df45ba42ee748fb65"
	I1009 19:43:03.338068  494251 cri.go:89] found id: "07f42cc6b9c838cf62593bb8dbf355567bcb13a93ff3b637e6424beb09826678"
	I1009 19:43:03.338103  494251 cri.go:89] found id: ""
	I1009 19:43:03.338188  494251 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:43:03.349826  494251 retry.go:31] will retry after 195.833431ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:03Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:43:03.546253  494251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:43:03.559789  494251 pause.go:52] kubelet running: false
	I1009 19:43:03.559863  494251 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:43:03.710094  494251 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:43:03.710182  494251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:43:03.779366  494251 cri.go:89] found id: "bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856"
	I1009 19:43:03.779392  494251 cri.go:89] found id: "489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033"
	I1009 19:43:03.779397  494251 cri.go:89] found id: "9d1c411171d0c75660af47fb4916909e0a344da9c5b1ab9af1308b477612ba13"
	I1009 19:43:03.779402  494251 cri.go:89] found id: "07dc64a3c26fed0535c468419ca5c44104c3d30d68ebfb1c21ea7919703acf23"
	I1009 19:43:03.779405  494251 cri.go:89] found id: "9b7ab3972f70411906d2737adfb5a6be317ef4ac4e38127df45ba42ee748fb65"
	I1009 19:43:03.779409  494251 cri.go:89] found id: "07f42cc6b9c838cf62593bb8dbf355567bcb13a93ff3b637e6424beb09826678"
	I1009 19:43:03.779413  494251 cri.go:89] found id: ""
	I1009 19:43:03.779461  494251 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:43:03.791127  494251 retry.go:31] will retry after 505.591113ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:03Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:43:04.297776  494251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:43:04.311626  494251 pause.go:52] kubelet running: false
	I1009 19:43:04.311710  494251 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:43:04.464065  494251 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:43:04.464137  494251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:43:04.538881  494251 cri.go:89] found id: "bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856"
	I1009 19:43:04.538906  494251 cri.go:89] found id: "489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033"
	I1009 19:43:04.538911  494251 cri.go:89] found id: "9d1c411171d0c75660af47fb4916909e0a344da9c5b1ab9af1308b477612ba13"
	I1009 19:43:04.538915  494251 cri.go:89] found id: "07dc64a3c26fed0535c468419ca5c44104c3d30d68ebfb1c21ea7919703acf23"
	I1009 19:43:04.538919  494251 cri.go:89] found id: "9b7ab3972f70411906d2737adfb5a6be317ef4ac4e38127df45ba42ee748fb65"
	I1009 19:43:04.538933  494251 cri.go:89] found id: "07f42cc6b9c838cf62593bb8dbf355567bcb13a93ff3b637e6424beb09826678"
	I1009 19:43:04.538936  494251 cri.go:89] found id: ""
	I1009 19:43:04.538983  494251 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:43:04.553489  494251 out.go:203] 
	W1009 19:43:04.556317  494251 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:43:04.556338  494251 out.go:285] * 
	* 
	W1009 19:43:04.563367  494251 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:43:04.566348  494251 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-532612 --alsologtostderr -v=1 failed: exit status 80
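The failure is mechanical rather than Kubernetes-related: the pause command disables the kubelet, asks crictl for containers in the kube-system, kubernetes-dashboard and istio-operator namespaces, then runs "sudo runc list -f json", which keeps failing because /run/runc does not exist on this cri-o node; after three backoff retries it gives up with GUEST_PAUSE. A condensed Go sketch of that retry loop as it appears in the log, with local command execution standing in for minikube's SSH runner (illustrative only, not minikube's pause code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runOnNode stands in for minikube's ssh_runner; here it just runs the command locally.
func runOnNode(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Backoff intervals roughly matching the retry.go lines in the log above.
	delays := []time.Duration{136 * time.Millisecond, 196 * time.Millisecond, 506 * time.Millisecond}

	// Stop the kubelet so no new containers appear while pausing.
	runOnNode("sudo", "systemctl", "disable", "--now", "kubelet")

	for attempt := 0; ; attempt++ {
		// List containers known to runc; on this node /run/runc is missing, so this fails.
		out, err := runOnNode("sudo", "runc", "list", "-f", "json")
		if err == nil {
			fmt.Println("runc containers:", out)
			return
		}
		if attempt >= len(delays) {
			// Corresponds to the final GUEST_PAUSE exit in the stderr above.
			fmt.Printf("Exiting due to GUEST_PAUSE: list running: %v\n%s\n", err, out)
			return
		}
		time.Sleep(delays[attempt])
	}
}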
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-532612
helpers_test.go:243: (dbg) docker inspect newest-cni-532612:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9",
	        "Created": "2025-10-09T19:41:53.109404869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492523,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:42:46.008459042Z",
	            "FinishedAt": "2025-10-09T19:42:44.596277056Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/hosts",
	        "LogPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9-json.log",
	        "Name": "/newest-cni-532612",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-532612:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-532612",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9",
	                "LowerDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-532612",
	                "Source": "/var/lib/docker/volumes/newest-cni-532612/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-532612",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-532612",
	                "name.minikube.sigs.k8s.io": "newest-cni-532612",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b492f4d844f59645ee5c823bc40b92eac483b23ec86dd6c7ae2b1102dd97570",
	            "SandboxKey": "/var/run/docker/netns/7b492f4d844f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-532612": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:7c:3d:7e:54:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c633bac8bb6919b619e12e75f7760538611f5807d20349293f759d98cda4b7a",
	                    "EndpointID": "44e47aba513fa6b52d6e4bc780c3a27f298dad2c36369df93955b7b62eae606a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-532612",
	                        "2d63c6e10b44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
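For reference, the two fields the post-mortem actually acts on here, the container's State.Status and the published SSH port, can be pulled with docker's Go-template formatting (the same `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` template the minikube log uses further down) instead of dumping the whole inspect document. This is a hedged sketch, not the code in helpers_test.go:

// Illustrative helper (assumed, not from the test suite): query single fields
// from `docker inspect` using -f templates.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspectField(container, format string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "newest-cni-532612"
	status, err := inspectField(name, "{{.State.Status}}")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Port template copied from the report; an error here just leaves the port empty.
	sshPort, _ := inspectField(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	fmt.Printf("state=%s ssh=127.0.0.1:%s\n", status, sshPort)
}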
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-532612 -n newest-cni-532612
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-532612 -n newest-cni-532612: exit status 2 (351.984119ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
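The status probe above is easy to misread: exit status 2 does not mean the host is down, since stdout still reports Running, and the helper logs it as "(may be ok)". A small sketch of that tolerant interpretation follows; the binary path and profile name are taken from the log, while the handling itself is an assumption about the helper's intent rather than its actual code.

// Hedged sketch: rerun the same `minikube status --format={{.Host}}` call and
// treat a non-zero exit as informational, mirroring the "(may be ok)" note.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "newest-cni-532612", "-n", "newest-cni-532612")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Non-zero exit codes from `minikube status` encode component state;
		// they are recorded but do not by themselves fail the post-mortem.
		fmt.Printf("host=%q (exit %d, may be ok)\n", host, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("status probe failed:", err)
		return
	}
	fmt.Printf("host=%q\n", host)
}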
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-532612 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-532612 logs -n 25: (1.154596874s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ stop    │ -p embed-certs-779570 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p disable-driver-mounts-557073                                                                                                                                                                                                               │ disable-driver-mounts-557073 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ image   │ embed-certs-779570 image list --format=json                                                                                                                                                                                                   │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-779570 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-661639 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-661639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-532612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-532612 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-532612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:43 UTC │
	│ image   │ newest-cni-532612 image list --format=json                                                                                                                                                                                                    │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ pause   │ -p newest-cni-532612 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:42:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:42:45.708329  492391 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:42:45.708447  492391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:45.708459  492391 out.go:374] Setting ErrFile to fd 2...
	I1009 19:42:45.708464  492391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:45.708729  492391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:42:45.709116  492391 out.go:368] Setting JSON to false
	I1009 19:42:45.710097  492391 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8717,"bootTime":1760030249,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:42:45.710211  492391 start.go:141] virtualization:  
	I1009 19:42:45.714442  492391 out.go:179] * [newest-cni-532612] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:42:45.724932  492391 notify.go:220] Checking for updates...
	I1009 19:42:45.727997  492391 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:42:45.730940  492391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:42:45.733968  492391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:45.736965  492391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:42:45.739914  492391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:42:45.742904  492391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:42:45.746379  492391 config.go:182] Loaded profile config "newest-cni-532612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:45.747006  492391 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:42:45.785011  492391 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:42:45.785154  492391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:42:45.842845  492391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:42:45.833231563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:42:45.842958  492391 docker.go:318] overlay module found
	I1009 19:42:45.846111  492391 out.go:179] * Using the docker driver based on existing profile
	I1009 19:42:45.849238  492391 start.go:305] selected driver: docker
	I1009 19:42:45.849259  492391 start.go:925] validating driver "docker" against &{Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:45.849360  492391 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:42:45.850088  492391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:42:45.904591  492391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:42:45.895725903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:42:45.904935  492391 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 19:42:45.904968  492391 cni.go:84] Creating CNI manager for ""
	I1009 19:42:45.905031  492391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:42:45.905073  492391 start.go:349] cluster config:
	{Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:45.908216  492391 out.go:179] * Starting "newest-cni-532612" primary control-plane node in "newest-cni-532612" cluster
	I1009 19:42:45.911129  492391 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:42:45.914019  492391 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:42:45.916905  492391 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:42:45.916959  492391 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:42:45.916973  492391 cache.go:64] Caching tarball of preloaded images
	I1009 19:42:45.916984  492391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:42:45.917068  492391 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:42:45.917078  492391 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:42:45.917200  492391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/config.json ...
	I1009 19:42:45.953742  492391 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:42:45.953762  492391 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:42:45.953782  492391 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:42:45.953807  492391 start.go:360] acquireMachinesLock for newest-cni-532612: {Name:mk8a2332e6fb43f25fcf3e7ccbe060e53d52313a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:42:45.953875  492391 start.go:364] duration metric: took 50.569µs to acquireMachinesLock for "newest-cni-532612"
	I1009 19:42:45.953896  492391 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:42:45.953901  492391 fix.go:54] fixHost starting: 
	I1009 19:42:45.954208  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:45.972657  492391 fix.go:112] recreateIfNeeded on newest-cni-532612: state=Stopped err=<nil>
	W1009 19:42:45.972696  492391 fix.go:138] unexpected machine state, will restart: <nil>
	W1009 19:42:42.994397  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	W1009 19:42:44.995156  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	I1009 19:42:45.976031  492391 out.go:252] * Restarting existing docker container for "newest-cni-532612" ...
	I1009 19:42:45.976142  492391 cli_runner.go:164] Run: docker start newest-cni-532612
	I1009 19:42:46.233633  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:46.253726  492391 kic.go:430] container "newest-cni-532612" state is running.
	I1009 19:42:46.254243  492391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-532612
	I1009 19:42:46.281222  492391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/config.json ...
	I1009 19:42:46.281476  492391 machine.go:93] provisionDockerMachine start ...
	I1009 19:42:46.281548  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:46.303539  492391 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:46.303875  492391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1009 19:42:46.303885  492391 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:42:46.304563  492391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:42:49.449757  492391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-532612
	
	I1009 19:42:49.449790  492391 ubuntu.go:182] provisioning hostname "newest-cni-532612"
	I1009 19:42:49.449852  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:49.467838  492391 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:49.468147  492391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1009 19:42:49.468164  492391 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-532612 && echo "newest-cni-532612" | sudo tee /etc/hostname
	I1009 19:42:49.635632  492391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-532612
	
	I1009 19:42:49.635782  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:49.654171  492391 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:49.654486  492391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1009 19:42:49.654514  492391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-532612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-532612/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-532612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:42:49.802818  492391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:42:49.802904  492391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:42:49.802961  492391 ubuntu.go:190] setting up certificates
	I1009 19:42:49.802992  492391 provision.go:84] configureAuth start
	I1009 19:42:49.803097  492391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-532612
	I1009 19:42:49.822291  492391 provision.go:143] copyHostCerts
	I1009 19:42:49.822359  492391 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:42:49.822375  492391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:42:49.822453  492391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:42:49.822557  492391 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:42:49.822562  492391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:42:49.822590  492391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:42:49.822656  492391 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:42:49.822661  492391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:42:49.822687  492391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:42:49.822737  492391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.newest-cni-532612 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-532612]
	I1009 19:42:50.371017  492391 provision.go:177] copyRemoteCerts
	I1009 19:42:50.371094  492391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:42:50.371138  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:50.388825  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:50.492942  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:42:50.514956  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:42:50.535094  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:42:50.553097  492391 provision.go:87] duration metric: took 750.067145ms to configureAuth
	I1009 19:42:50.553180  492391 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:42:50.553401  492391 config.go:182] Loaded profile config "newest-cni-532612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:50.553515  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:50.575241  492391 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:50.575620  492391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1009 19:42:50.575640  492391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1009 19:42:47.494679  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	W1009 19:42:49.494861  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	W1009 19:42:51.495603  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	I1009 19:42:50.891026  492391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:42:50.891052  492391 machine.go:96] duration metric: took 4.609566468s to provisionDockerMachine
	I1009 19:42:50.891063  492391 start.go:293] postStartSetup for "newest-cni-532612" (driver="docker")
	I1009 19:42:50.891073  492391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:42:50.891133  492391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:42:50.891193  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:50.909989  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:51.015166  492391 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:42:51.019311  492391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:42:51.019351  492391 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:42:51.019363  492391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:42:51.019431  492391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:42:51.019524  492391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:42:51.019640  492391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:42:51.027907  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:42:51.046475  492391 start.go:296] duration metric: took 155.396897ms for postStartSetup
	I1009 19:42:51.046575  492391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:42:51.046667  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:51.067067  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:51.176570  492391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:42:51.185452  492391 fix.go:56] duration metric: took 5.231543327s for fixHost
	I1009 19:42:51.185476  492391 start.go:83] releasing machines lock for "newest-cni-532612", held for 5.231592271s
	I1009 19:42:51.185556  492391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-532612
	I1009 19:42:51.206781  492391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:42:51.206916  492391 ssh_runner.go:195] Run: cat /version.json
	I1009 19:42:51.206966  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:51.207071  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:51.226060  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:51.249639  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:51.419727  492391 ssh_runner.go:195] Run: systemctl --version
	I1009 19:42:51.426590  492391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:42:51.464897  492391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:42:51.469788  492391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:42:51.469905  492391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:42:51.477984  492391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:42:51.478008  492391 start.go:495] detecting cgroup driver to use...
	I1009 19:42:51.478057  492391 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:42:51.478206  492391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:42:51.496611  492391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:42:51.510943  492391 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:42:51.511065  492391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:42:51.528148  492391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:42:51.541666  492391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:42:51.657793  492391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:42:51.779525  492391 docker.go:234] disabling docker service ...
	I1009 19:42:51.779637  492391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:42:51.802448  492391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:42:51.815749  492391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:42:51.938449  492391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:42:52.059959  492391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:42:52.077555  492391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:42:52.093242  492391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:42:52.093360  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.102661  492391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:42:52.102731  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.111745  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.120624  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.129885  492391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:42:52.138800  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.148461  492391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.158553  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.167615  492391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:42:52.177318  492391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:42:52.184794  492391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:52.304507  492391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:42:52.430292  492391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:42:52.430376  492391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:42:52.434110  492391 start.go:563] Will wait 60s for crictl version
	I1009 19:42:52.434223  492391 ssh_runner.go:195] Run: which crictl
	I1009 19:42:52.437894  492391 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:42:52.464003  492391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:42:52.464091  492391 ssh_runner.go:195] Run: crio --version
	I1009 19:42:52.497314  492391 ssh_runner.go:195] Run: crio --version
	I1009 19:42:52.531700  492391 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:42:52.534550  492391 cli_runner.go:164] Run: docker network inspect newest-cni-532612 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:42:52.550878  492391 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:42:52.554833  492391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:42:52.567977  492391 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1009 19:42:52.570814  492391 kubeadm.go:883] updating cluster {Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:42:52.570974  492391 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:42:52.571063  492391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:42:52.611126  492391 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:42:52.611150  492391 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:42:52.611207  492391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:42:52.642822  492391 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:42:52.642844  492391 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:42:52.642853  492391 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 19:42:52.642965  492391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-532612 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:42:52.643055  492391 ssh_runner.go:195] Run: crio config
	I1009 19:42:52.692146  492391 cni.go:84] Creating CNI manager for ""
	I1009 19:42:52.692167  492391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:42:52.692184  492391 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1009 19:42:52.692227  492391 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-532612 NodeName:newest-cni-532612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:42:52.692384  492391 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-532612"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
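The kubeadm config generated above is a multi-document YAML in which the ClusterConfiguration podSubnet and the KubeProxyConfiguration clusterCIDR both carry the kubeadm.pod-network-cidr value (10.42.0.0/16). A small sketch, assuming gopkg.in/yaml.v3 and the file path from the log, that cross-checks the two fields; this is an illustrative consistency check, not part of minikube:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
        if err != nil {
            panic(err)
        }
        var podSubnet, clusterCIDR string
        // the file holds several YAML documents separated by --- lines
        for _, doc := range strings.Split(string(raw), "\n---\n") {
            var m map[string]interface{}
            if yaml.Unmarshal([]byte(doc), &m) != nil {
                continue
            }
            switch m["kind"] {
            case "ClusterConfiguration":
                if nw, ok := m["networking"].(map[string]interface{}); ok {
                    podSubnet, _ = nw["podSubnet"].(string)
                }
            case "KubeProxyConfiguration":
                clusterCIDR, _ = m["clusterCIDR"].(string)
            }
        }
        fmt.Printf("podSubnet=%s clusterCIDR=%s match=%v\n", podSubnet, clusterCIDR, podSubnet == clusterCIDR)
    }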
	
	I1009 19:42:52.692470  492391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:42:52.700328  492391 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:42:52.700453  492391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:42:52.707662  492391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:42:52.720556  492391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:42:52.733383  492391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1009 19:42:52.746082  492391 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:42:52.750040  492391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:42:52.760131  492391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:52.868065  492391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:42:52.884071  492391 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612 for IP: 192.168.85.2
	I1009 19:42:52.884090  492391 certs.go:195] generating shared ca certs ...
	I1009 19:42:52.884106  492391 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:52.884241  492391 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:42:52.884285  492391 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:42:52.884291  492391 certs.go:257] generating profile certs ...
	I1009 19:42:52.884368  492391 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/client.key
	I1009 19:42:52.884412  492391 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/apiserver.key.db6af006
	I1009 19:42:52.884454  492391 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/proxy-client.key
	I1009 19:42:52.884560  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:42:52.884587  492391 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:42:52.884595  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:42:52.884619  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:42:52.884640  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:42:52.884664  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:42:52.884703  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:42:52.885256  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:42:52.903203  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:42:52.920601  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:42:52.938340  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:42:52.956089  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:42:52.975646  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:42:52.995347  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:42:53.016097  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:42:53.034537  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:42:53.057198  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:42:53.091221  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:42:53.119761  492391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:42:53.136019  492391 ssh_runner.go:195] Run: openssl version
	I1009 19:42:53.142924  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:42:53.153356  492391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:42:53.157233  492391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:42:53.157350  492391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:42:53.202287  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:42:53.212563  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:42:53.223316  492391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:53.227656  492391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:53.227749  492391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:53.270585  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:42:53.280350  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:42:53.289527  492391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:42:53.293428  492391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:42:53.293524  492391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:42:53.335297  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:42:53.343923  492391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:42:53.348002  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:42:53.389325  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:42:53.436859  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:42:53.478855  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:42:53.523988  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:42:53.571261  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
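Each "openssl x509 ... -checkend 86400" run above asks whether a certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509, as a rough sketch; the certificate path is one of those tested above, and the function name is made up for illustration:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the certificate at path expires within d,
    // mirroring the openssl -checkend calls in the log above.
    func checkend(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := checkend("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", expiring)
    }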
	I1009 19:42:53.637739  492391 kubeadm.go:400] StartCluster: {Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:53.637877  492391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:42:53.637954  492391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:42:53.694269  492391 cri.go:89] found id: ""
	I1009 19:42:53.694400  492391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:42:53.707369  492391 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:42:53.707448  492391 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:42:53.707547  492391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:42:53.718951  492391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:42:53.719553  492391 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-532612" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:53.719869  492391 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-532612" cluster setting kubeconfig missing "newest-cni-532612" context setting]
	I1009 19:42:53.720338  492391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:53.722634  492391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:42:53.733948  492391 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 19:42:53.734024  492391 kubeadm.go:601] duration metric: took 26.555655ms to restartPrimaryControlPlane
	I1009 19:42:53.734048  492391 kubeadm.go:402] duration metric: took 96.31827ms to StartCluster
	I1009 19:42:53.734078  492391 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:53.734175  492391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:53.735156  492391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:53.735398  492391 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:42:53.735825  492391 config.go:182] Loaded profile config "newest-cni-532612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:53.735800  492391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:42:53.736074  492391 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-532612"
	I1009 19:42:53.736097  492391 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-532612"
	W1009 19:42:53.736111  492391 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:42:53.736170  492391 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:42:53.736125  492391 addons.go:69] Setting default-storageclass=true in profile "newest-cni-532612"
	I1009 19:42:53.736265  492391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-532612"
	I1009 19:42:53.736597  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:53.736777  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:53.736079  492391 addons.go:69] Setting dashboard=true in profile "newest-cni-532612"
	I1009 19:42:53.737183  492391 addons.go:238] Setting addon dashboard=true in "newest-cni-532612"
	W1009 19:42:53.737193  492391 addons.go:247] addon dashboard should already be in state true
	I1009 19:42:53.737215  492391 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:42:53.737622  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:53.744652  492391 out.go:179] * Verifying Kubernetes components...
	I1009 19:42:53.748210  492391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:53.788951  492391 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:42:53.793206  492391 addons.go:238] Setting addon default-storageclass=true in "newest-cni-532612"
	W1009 19:42:53.793231  492391 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:42:53.793267  492391 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:42:53.793750  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:53.794004  492391 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:42:53.794024  492391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:42:53.794073  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:53.826188  492391 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:42:53.826459  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:53.836084  492391 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:42:53.840201  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:42:53.840229  492391 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:42:53.840295  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:53.853946  492391 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:42:53.853969  492391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:42:53.854042  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:53.884119  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:53.892120  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:54.128536  492391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:42:54.160879  492391 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:42:54.160958  492391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:42:54.191882  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:42:54.191908  492391 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:42:54.204158  492391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:42:54.208980  492391 api_server.go:72] duration metric: took 473.51643ms to wait for apiserver process to appear ...
	I1009 19:42:54.209008  492391 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:42:54.209028  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:54.232444  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:42:54.232468  492391 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:42:54.247337  492391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:42:54.327908  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:42:54.327942  492391 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:42:54.423465  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:42:54.423490  492391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:42:54.496012  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:42:54.496038  492391 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:42:54.515783  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:42:54.515820  492391 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:42:54.546743  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:42:54.546769  492391 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:42:54.571347  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:42:54.571372  492391 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:42:54.593864  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:42:54.593901  492391 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:42:54.617393  492391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 19:42:53.995039  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	W1009 19:42:55.997468  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	I1009 19:42:59.020567  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:42:59.020591  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:42:59.020604  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:59.132682  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:42:59.132708  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:42:59.209911  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:59.299162  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:42:59.299241  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:42:59.709443  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:59.718089  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:42:59.718112  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:43:00.209745  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:43:00.231157  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:43:00.231189  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:43:00.709582  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:43:00.720395  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:43:00.720419  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:43:00.821399  492391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.617209904s)
	I1009 19:43:00.821457  492391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.574097403s)
	I1009 19:43:00.821820  492391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.204354364s)
	I1009 19:43:00.825130  492391 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-532612 addons enable metrics-server
	
	I1009 19:43:00.848954  492391 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1009 19:43:00.852320  492391 addons.go:514] duration metric: took 7.116506786s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1009 19:43:01.210017  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:43:01.219230  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:43:01.220487  492391 api_server.go:141] control plane version: v1.34.1
	I1009 19:43:01.220517  492391 api_server.go:131] duration metric: took 7.011500991s to wait for apiserver health ...
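The healthz wait above is a plain HTTPS poll of /healthz that tolerates 403 (before the rbac/bootstrap-roles post-start hook finishes) and 500 (while other post-start hooks are still running) until it finally sees 200. A minimal sketch of such a loop, assuming the apiserver endpoint from the log; TLS verification is skipped here only to keep the example short, and the interval and deadline are assumptions:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(1 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403 before RBAC bootstrap, 500 while post-start hooks run
                fmt.Println("healthz returned", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver")
    }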
	I1009 19:43:01.220526  492391 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:43:01.227813  492391 system_pods.go:59] 8 kube-system pods found
	I1009 19:43:01.227861  492391 system_pods.go:61] "coredns-66bc5c9577-ptcc6" [9cb17d4b-1710-4794-919a-92018b128d23] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 19:43:01.227871  492391 system_pods.go:61] "etcd-newest-cni-532612" [5fa83761-6c4f-4748-be0e-55c99a748e7c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:43:01.227881  492391 system_pods.go:61] "kindnet-l62gf" [1dff8975-257b-409c-85f7-7f11e9444ec0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1009 19:43:01.227889  492391 system_pods.go:61] "kube-apiserver-newest-cni-532612" [26cb7bbd-ad4d-4bbf-a096-35c75aeb359c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:43:01.227899  492391 system_pods.go:61] "kube-controller-manager-newest-cni-532612" [0e361d42-6133-4366-b817-141687d94c94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:43:01.227906  492391 system_pods.go:61] "kube-proxy-bsq7j" [3415e29c-3f95-48f5-977e-ab18e00181ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 19:43:01.227918  492391 system_pods.go:61] "kube-scheduler-newest-cni-532612" [06cc763d-090b-497c-a0ce-b6276f27ed63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:43:01.227933  492391 system_pods.go:61] "storage-provisioner" [a572509f-c910-406c-8c63-e8b030ccb29c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 19:43:01.227945  492391 system_pods.go:74] duration metric: took 7.412332ms to wait for pod list to return data ...
	I1009 19:43:01.227956  492391 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:43:01.267167  492391 default_sa.go:45] found service account: "default"
	I1009 19:43:01.267195  492391 default_sa.go:55] duration metric: took 39.22937ms for default service account to be created ...
	I1009 19:43:01.267218  492391 kubeadm.go:586] duration metric: took 7.53174934s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 19:43:01.267242  492391 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:43:01.269851  492391 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:43:01.269892  492391 node_conditions.go:123] node cpu capacity is 2
	I1009 19:43:01.269904  492391 node_conditions.go:105] duration metric: took 2.656984ms to run NodePressure ...
	I1009 19:43:01.269915  492391 start.go:241] waiting for startup goroutines ...
	I1009 19:43:01.269923  492391 start.go:246] waiting for cluster config update ...
	I1009 19:43:01.269941  492391 start.go:255] writing updated cluster config ...
	I1009 19:43:01.270279  492391 ssh_runner.go:195] Run: rm -f paused
	I1009 19:43:01.383308  492391 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:43:01.386542  492391 out.go:179] * Done! kubectl is now configured to use "newest-cni-532612" cluster and "default" namespace by default
	W1009 19:42:58.495597  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	I1009 19:42:59.994465  488960 pod_ready.go:94] pod "coredns-66bc5c9577-xmz2b" is "Ready"
	I1009 19:42:59.994488  488960 pod_ready.go:86] duration metric: took 33.505743944s for pod "coredns-66bc5c9577-xmz2b" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:42:59.999823  488960 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.015930  488960 pod_ready.go:94] pod "etcd-default-k8s-diff-port-661639" is "Ready"
	I1009 19:43:00.015961  488960 pod_ready.go:86] duration metric: took 16.107815ms for pod "etcd-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.032959  488960 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.066741  488960 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-661639" is "Ready"
	I1009 19:43:00.066831  488960 pod_ready.go:86] duration metric: took 33.839765ms for pod "kube-apiserver-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.079729  488960 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.193537  488960 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-661639" is "Ready"
	I1009 19:43:00.193629  488960 pod_ready.go:86] duration metric: took 113.80313ms for pod "kube-controller-manager-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.393413  488960 pod_ready.go:83] waiting for pod "kube-proxy-8nqdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.791997  488960 pod_ready.go:94] pod "kube-proxy-8nqdl" is "Ready"
	I1009 19:43:00.792029  488960 pod_ready.go:86] duration metric: took 398.581501ms for pod "kube-proxy-8nqdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.993142  488960 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:01.399505  488960 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-661639" is "Ready"
	I1009 19:43:01.399528  488960 pod_ready.go:86] duration metric: took 406.36074ms for pod "kube-scheduler-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:01.399540  488960 pod_ready.go:40] duration metric: took 34.916667815s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:43:01.520271  488960 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:43:01.523934  488960 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-661639" cluster and "default" namespace by default
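The pod_ready waits interleaved above (from the concurrent default-k8s-diff-port-661639 run, pid 488960) boil down to checking each pod's Ready condition. A hedged client-go sketch of that check, using the kubeconfig path and a pod name taken from the log; the helper and error handling are illustrative only, not the test harness's own code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21139-284447/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-xmz2b", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", podReady(pod))
    }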
	
	
	==> CRI-O <==
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.36122924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.36433985Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-bsq7j/POD" id=21dd228f-dfad-46f8-85c2-0d3f4eecab19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.364410596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.373717836Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=21dd228f-dfad-46f8-85c2-0d3f4eecab19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.378949137Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=579ce777-7000-48fd-99e1-b2535eb98247 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.410313171Z" level=info msg="Ran pod sandbox 9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c with infra container: kube-system/kindnet-l62gf/POD" id=579ce777-7000-48fd-99e1-b2535eb98247 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.419196824Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9830f89f-568c-4f00-933c-786e0afc950e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.420817447Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=42394fc0-5143-45cc-bc95-0b1b766b11cb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.422385885Z" level=info msg="Creating container: kube-system/kindnet-l62gf/kindnet-cni" id=fa52d9e4-4dac-4684-b17f-dbd5290cb94e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.422700924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.448153978Z" level=info msg="Ran pod sandbox 280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893 with infra container: kube-system/kube-proxy-bsq7j/POD" id=21dd228f-dfad-46f8-85c2-0d3f4eecab19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.448270328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.450744902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.449718411Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=86098891-8d2b-43b6-b093-6747ba16b40e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.454879615Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fa2a0edf-b579-48b9-98a2-157702f1e54c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.459479597Z" level=info msg="Creating container: kube-system/kube-proxy-bsq7j/kube-proxy" id=5b32b1e4-b655-43c2-8bbe-c02bd1a620f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.466099461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.49006704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.490726059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.526694482Z" level=info msg="Created container 489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033: kube-system/kindnet-l62gf/kindnet-cni" id=fa52d9e4-4dac-4684-b17f-dbd5290cb94e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.527716091Z" level=info msg="Starting container: 489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033" id=dce47ca9-de2e-4e24-86cc-7cc0da476d1e name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.529885307Z" level=info msg="Started container" PID=1052 containerID=489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033 description=kube-system/kindnet-l62gf/kindnet-cni id=dce47ca9-de2e-4e24-86cc-7cc0da476d1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.538440332Z" level=info msg="Created container bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856: kube-system/kube-proxy-bsq7j/kube-proxy" id=5b32b1e4-b655-43c2-8bbe-c02bd1a620f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.539223825Z" level=info msg="Starting container: bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856" id=ea70dff8-d517-4b51-bd18-c4d58011ef37 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.55056423Z" level=info msg="Started container" PID=1053 containerID=bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856 description=kube-system/kube-proxy-bsq7j/kube-proxy id=ea70dff8-d517-4b51-bd18-c4d58011ef37 name=/runtime.v1.RuntimeService/StartContainer sandboxID=280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bfac8d577d5ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   280a48ed2cb0c       kube-proxy-bsq7j                            kube-system
	489b35195754a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   9a14115a86750       kindnet-l62gf                               kube-system
	9d1c411171d0c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   db98b3cbb350e       kube-controller-manager-newest-cni-532612   kube-system
	07dc64a3c26fe       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   7dc7d2929683e       kube-scheduler-newest-cni-532612            kube-system
	9b7ab3972f704       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   5f3b88f90a3c6       kube-apiserver-newest-cni-532612            kube-system
	07f42cc6b9c83       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   78f1d16d447ac       etcd-newest-cni-532612                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-532612
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-532612
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=newest-cni-532612
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_42_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:42:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-532612
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:42:59 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:42:59 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:42:59 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 09 Oct 2025 19:42:59 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-532612
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fb19f8f22e48c0983b2521c34667f3
	  System UUID:                599339a9-1ab0-448e-9b04-25350ae8a3fc
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-532612                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-l62gf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      38s
	  kube-system                 kube-apiserver-newest-cni-532612             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-newest-cni-532612    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-bsq7j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-newest-cni-532612             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 37s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 54s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node newest-cni-532612 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     43s                kubelet          Node newest-cni-532612 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 43s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  43s                kubelet          Node newest-cni-532612 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s                kubelet          Node newest-cni-532612 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 43s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           39s                node-controller  Node newest-cni-532612 event: Registered Node newest-cni-532612 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-532612 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-532612 event: Registered Node newest-cni-532612 in Controller
	
	
	==> dmesg <==
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[  +9.958825] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:42] overlayfs: idmapped layers are currently not supported
	[  +3.815530] overlayfs: idmapped layers are currently not supported
	[ +37.476110] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [07f42cc6b9c838cf62593bb8dbf355567bcb13a93ff3b637e6424beb09826678] <==
	{"level":"warn","ts":"2025-10-09T19:42:57.914227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:57.933279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:57.959546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:57.972458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:57.992531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.008508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.030198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.048421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.060741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.076794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.091696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.106674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.123062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.142004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.157163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.174961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.191186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.212975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.222909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.238848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.255921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.304667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.319308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.338227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.387903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56182","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:05 up  2:25,  0 user,  load average: 4.01, 3.36, 2.60
	Linux newest-cni-532612 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033] <==
	I1009 19:43:00.715116       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:43:00.715335       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:43:00.715435       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:43:00.715446       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:43:00.715455       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:43:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:43:00.836124       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:43:00.836197       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:43:00.836230       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:43:00.836919       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [9b7ab3972f70411906d2737adfb5a6be317ef4ac4e38127df45ba42ee748fb65] <==
	I1009 19:42:59.400486       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:42:59.406340       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:42:59.406375       1 policy_source.go:240] refreshing policies
	I1009 19:42:59.411043       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:42:59.420082       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 19:42:59.420152       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:42:59.455609       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 19:42:59.474856       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:42:59.475081       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:42:59.475141       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 19:42:59.475154       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:42:59.475785       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:42:59.488182       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 19:42:59.488696       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:42:59.962443       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:42:59.989909       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:43:00.146913       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:43:00.245495       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:43:00.335909       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:43:00.414352       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:43:00.592932       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.250.126"}
	I1009 19:43:00.635680       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.45.181"}
	I1009 19:43:02.873011       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:43:02.973021       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:43:03.083189       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9d1c411171d0c75660af47fb4916909e0a344da9c5b1ab9af1308b477612ba13] <==
	I1009 19:43:02.585168       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:43:02.585930       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:43:02.585953       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:43:02.585961       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:43:02.587642       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 19:43:02.591392       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:43:02.595077       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:43:02.595235       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:43:02.597378       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:43:02.612091       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:43:02.613691       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:43:02.616003       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:43:02.616202       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 19:43:02.616577       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:43:02.616619       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:43:02.616673       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 19:43:02.616719       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 19:43:02.618507       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:43:02.619988       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:43:02.631424       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:43:02.632959       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:43:02.633068       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:43:02.635844       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:43:02.642491       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:43:02.646268       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856] <==
	I1009 19:43:00.763993       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:43:00.900018       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:43:01.003788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:43:01.003822       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:43:01.003914       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:43:01.024810       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:43:01.024886       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:43:01.028566       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:43:01.028907       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:43:01.028932       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:43:01.030275       1 config.go:200] "Starting service config controller"
	I1009 19:43:01.030347       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:43:01.030735       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:43:01.030785       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:43:01.031494       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:43:01.034293       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:43:01.031965       1 config.go:309] "Starting node config controller"
	I1009 19:43:01.034381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:43:01.034414       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:43:01.131502       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:43:01.134809       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:43:01.134821       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [07dc64a3c26fed0535c468419ca5c44104c3d30d68ebfb1c21ea7919703acf23] <==
	I1009 19:42:56.813913       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:42:59.011677       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:42:59.011722       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:42:59.011733       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:42:59.011740       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:42:59.400671       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:42:59.400699       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:59.409246       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:42:59.409387       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:59.409407       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:59.409424       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:42:59.510731       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.158259     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: E1009 19:42:59.435448     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-532612\" already exists" pod="kube-system/etcd-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.435485     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: E1009 19:42:59.469709     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-532612\" already exists" pod="kube-system/kube-apiserver-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.469758     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.505309     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.505438     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.505466     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.507559     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: E1009 19:42:59.522248     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-532612\" already exists" pod="kube-system/kube-controller-manager-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.522308     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: E1009 19:42:59.549993     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-532612\" already exists" pod="kube-system/kube-scheduler-newest-cni-532612"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.018977     726 apiserver.go:52] "Watching apiserver"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.042475     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.102565     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3415e29c-3f95-48f5-977e-ab18e00181ab-lib-modules\") pod \"kube-proxy-bsq7j\" (UID: \"3415e29c-3f95-48f5-977e-ab18e00181ab\") " pod="kube-system/kube-proxy-bsq7j"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.102906     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-cni-cfg\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.103107     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-xtables-lock\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.103677     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3415e29c-3f95-48f5-977e-ab18e00181ab-xtables-lock\") pod \"kube-proxy-bsq7j\" (UID: \"3415e29c-3f95-48f5-977e-ab18e00181ab\") " pod="kube-system/kube-proxy-bsq7j"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.105200     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-lib-modules\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.165021     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: W1009 19:43:00.397852     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/crio-9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c WatchSource:0}: Error finding container 9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c: Status 404 returned error can't find the container with id 9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: W1009 19:43:00.410202     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/crio-280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893 WatchSource:0}: Error finding container 280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893: Status 404 returned error can't find the container with id 280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893
	Oct 09 19:43:02 newest-cni-532612 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:43:02 newest-cni-532612 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:43:02 newest-cni-532612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-532612 -n newest-cni-532612
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-532612 -n newest-cni-532612: exit status 2 (353.313688ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-532612 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ptcc6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nd9vc kubernetes-dashboard-855c9754f9-xxc5g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nd9vc kubernetes-dashboard-855c9754f9-xxc5g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nd9vc kubernetes-dashboard-855c9754f9-xxc5g: exit status 1 (85.688884ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ptcc6" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nd9vc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xxc5g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nd9vc kubernetes-dashboard-855c9754f9-xxc5g: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-532612
helpers_test.go:243: (dbg) docker inspect newest-cni-532612:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9",
	        "Created": "2025-10-09T19:41:53.109404869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492523,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:42:46.008459042Z",
	            "FinishedAt": "2025-10-09T19:42:44.596277056Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/hosts",
	        "LogPath": "/var/lib/docker/containers/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9-json.log",
	        "Name": "/newest-cni-532612",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-532612:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-532612",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9",
	                "LowerDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51871730127e6a73114ae243ba24380b9c6cbc8558b8131db14ae87a8e7647ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-532612",
	                "Source": "/var/lib/docker/volumes/newest-cni-532612/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-532612",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-532612",
	                "name.minikube.sigs.k8s.io": "newest-cni-532612",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b492f4d844f59645ee5c823bc40b92eac483b23ec86dd6c7ae2b1102dd97570",
	            "SandboxKey": "/var/run/docker/netns/7b492f4d844f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-532612": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:7c:3d:7e:54:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c633bac8bb6919b619e12e75f7760538611f5807d20349293f759d98cda4b7a",
	                    "EndpointID": "44e47aba513fa6b52d6e4bc780c3a27f298dad2c36369df93955b7b62eae606a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-532612",
	                        "2d63c6e10b44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-532612 -n newest-cni-532612
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-532612 -n newest-cni-532612: exit status 2 (350.068742ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-532612 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-532612 logs -n 25: (1.10817292s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-678119 image list --format=json                                                                                                                                                                                                    │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ pause   │ -p no-preload-678119 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-779570 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │                     │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ stop    │ -p embed-certs-779570 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p disable-driver-mounts-557073                                                                                                                                                                                                               │ disable-driver-mounts-557073 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ image   │ embed-certs-779570 image list --format=json                                                                                                                                                                                                   │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-779570 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-661639 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-661639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-532612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-532612 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-532612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:43 UTC │
	│ image   │ newest-cni-532612 image list --format=json                                                                                                                                                                                                    │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ pause   │ -p newest-cni-532612 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:42:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:42:45.708329  492391 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:42:45.708447  492391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:45.708459  492391 out.go:374] Setting ErrFile to fd 2...
	I1009 19:42:45.708464  492391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:45.708729  492391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:42:45.709116  492391 out.go:368] Setting JSON to false
	I1009 19:42:45.710097  492391 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8717,"bootTime":1760030249,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:42:45.710211  492391 start.go:141] virtualization:  
	I1009 19:42:45.714442  492391 out.go:179] * [newest-cni-532612] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:42:45.724932  492391 notify.go:220] Checking for updates...
	I1009 19:42:45.727997  492391 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:42:45.730940  492391 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:42:45.733968  492391 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:45.736965  492391 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:42:45.739914  492391 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:42:45.742904  492391 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:42:45.746379  492391 config.go:182] Loaded profile config "newest-cni-532612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:45.747006  492391 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:42:45.785011  492391 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:42:45.785154  492391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:42:45.842845  492391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:42:45.833231563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:42:45.842958  492391 docker.go:318] overlay module found
	I1009 19:42:45.846111  492391 out.go:179] * Using the docker driver based on existing profile
	I1009 19:42:45.849238  492391 start.go:305] selected driver: docker
	I1009 19:42:45.849259  492391 start.go:925] validating driver "docker" against &{Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:45.849360  492391 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:42:45.850088  492391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:42:45.904591  492391 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:42:45.895725903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:42:45.904935  492391 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 19:42:45.904968  492391 cni.go:84] Creating CNI manager for ""
	I1009 19:42:45.905031  492391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:42:45.905073  492391 start.go:349] cluster config:
	{Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:45.908216  492391 out.go:179] * Starting "newest-cni-532612" primary control-plane node in "newest-cni-532612" cluster
	I1009 19:42:45.911129  492391 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:42:45.914019  492391 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:42:45.916905  492391 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:42:45.916959  492391 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:42:45.916973  492391 cache.go:64] Caching tarball of preloaded images
	I1009 19:42:45.916984  492391 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:42:45.917068  492391 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:42:45.917078  492391 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:42:45.917200  492391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/config.json ...
	I1009 19:42:45.953742  492391 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:42:45.953762  492391 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:42:45.953782  492391 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:42:45.953807  492391 start.go:360] acquireMachinesLock for newest-cni-532612: {Name:mk8a2332e6fb43f25fcf3e7ccbe060e53d52313a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:42:45.953875  492391 start.go:364] duration metric: took 50.569µs to acquireMachinesLock for "newest-cni-532612"
	I1009 19:42:45.953896  492391 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:42:45.953901  492391 fix.go:54] fixHost starting: 
	I1009 19:42:45.954208  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:45.972657  492391 fix.go:112] recreateIfNeeded on newest-cni-532612: state=Stopped err=<nil>
	W1009 19:42:45.972696  492391 fix.go:138] unexpected machine state, will restart: <nil>
	W1009 19:42:42.994397  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	W1009 19:42:44.995156  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	I1009 19:42:45.976031  492391 out.go:252] * Restarting existing docker container for "newest-cni-532612" ...
	I1009 19:42:45.976142  492391 cli_runner.go:164] Run: docker start newest-cni-532612
	I1009 19:42:46.233633  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:46.253726  492391 kic.go:430] container "newest-cni-532612" state is running.
	I1009 19:42:46.254243  492391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-532612
	I1009 19:42:46.281222  492391 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/config.json ...
	I1009 19:42:46.281476  492391 machine.go:93] provisionDockerMachine start ...
	I1009 19:42:46.281548  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:46.303539  492391 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:46.303875  492391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1009 19:42:46.303885  492391 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:42:46.304563  492391 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:42:49.449757  492391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-532612
	
	I1009 19:42:49.449790  492391 ubuntu.go:182] provisioning hostname "newest-cni-532612"
	I1009 19:42:49.449852  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:49.467838  492391 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:49.468147  492391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1009 19:42:49.468164  492391 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-532612 && echo "newest-cni-532612" | sudo tee /etc/hostname
	I1009 19:42:49.635632  492391 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-532612
	
	I1009 19:42:49.635782  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:49.654171  492391 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:49.654486  492391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1009 19:42:49.654514  492391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-532612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-532612/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-532612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:42:49.802818  492391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:42:49.802904  492391 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-284447/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-284447/.minikube}
	I1009 19:42:49.802961  492391 ubuntu.go:190] setting up certificates
	I1009 19:42:49.802992  492391 provision.go:84] configureAuth start
	I1009 19:42:49.803097  492391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-532612
	I1009 19:42:49.822291  492391 provision.go:143] copyHostCerts
	I1009 19:42:49.822359  492391 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem, removing ...
	I1009 19:42:49.822375  492391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem
	I1009 19:42:49.822453  492391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/ca.pem (1078 bytes)
	I1009 19:42:49.822557  492391 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem, removing ...
	I1009 19:42:49.822562  492391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem
	I1009 19:42:49.822590  492391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/cert.pem (1123 bytes)
	I1009 19:42:49.822656  492391 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem, removing ...
	I1009 19:42:49.822661  492391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem
	I1009 19:42:49.822687  492391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-284447/.minikube/key.pem (1675 bytes)
	I1009 19:42:49.822737  492391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem org=jenkins.newest-cni-532612 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-532612]
	I1009 19:42:50.371017  492391 provision.go:177] copyRemoteCerts
	I1009 19:42:50.371094  492391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:42:50.371138  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:50.388825  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:50.492942  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:42:50.514956  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:42:50.535094  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:42:50.553097  492391 provision.go:87] duration metric: took 750.067145ms to configureAuth
	I1009 19:42:50.553180  492391 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:42:50.553401  492391 config.go:182] Loaded profile config "newest-cni-532612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:50.553515  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:50.575241  492391 main.go:141] libmachine: Using SSH client type: native
	I1009 19:42:50.575620  492391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1009 19:42:50.575640  492391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1009 19:42:47.494679  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	W1009 19:42:49.494861  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	W1009 19:42:51.495603  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	I1009 19:42:50.891026  492391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:42:50.891052  492391 machine.go:96] duration metric: took 4.609566468s to provisionDockerMachine
	I1009 19:42:50.891063  492391 start.go:293] postStartSetup for "newest-cni-532612" (driver="docker")
	I1009 19:42:50.891073  492391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:42:50.891133  492391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:42:50.891193  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:50.909989  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:51.015166  492391 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:42:51.019311  492391 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:42:51.019351  492391 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:42:51.019363  492391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/addons for local assets ...
	I1009 19:42:51.019431  492391 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-284447/.minikube/files for local assets ...
	I1009 19:42:51.019524  492391 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem -> 2863092.pem in /etc/ssl/certs
	I1009 19:42:51.019640  492391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:42:51.027907  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:42:51.046475  492391 start.go:296] duration metric: took 155.396897ms for postStartSetup
	I1009 19:42:51.046575  492391 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:42:51.046667  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:51.067067  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:51.176570  492391 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:42:51.185452  492391 fix.go:56] duration metric: took 5.231543327s for fixHost
	I1009 19:42:51.185476  492391 start.go:83] releasing machines lock for "newest-cni-532612", held for 5.231592271s
	I1009 19:42:51.185556  492391 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-532612
	I1009 19:42:51.206781  492391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:42:51.206916  492391 ssh_runner.go:195] Run: cat /version.json
	I1009 19:42:51.206966  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:51.207071  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:51.226060  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:51.249639  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:51.419727  492391 ssh_runner.go:195] Run: systemctl --version
	I1009 19:42:51.426590  492391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:42:51.464897  492391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:42:51.469788  492391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:42:51.469905  492391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:42:51.477984  492391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:42:51.478008  492391 start.go:495] detecting cgroup driver to use...
	I1009 19:42:51.478057  492391 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:42:51.478206  492391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:42:51.496611  492391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:42:51.510943  492391 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:42:51.511065  492391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:42:51.528148  492391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:42:51.541666  492391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:42:51.657793  492391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:42:51.779525  492391 docker.go:234] disabling docker service ...
	I1009 19:42:51.779637  492391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:42:51.802448  492391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:42:51.815749  492391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:42:51.938449  492391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:42:52.059959  492391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:42:52.077555  492391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:42:52.093242  492391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:42:52.093360  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.102661  492391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:42:52.102731  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.111745  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.120624  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.129885  492391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:42:52.138800  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.148461  492391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.158553  492391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:42:52.167615  492391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:42:52.177318  492391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:42:52.184794  492391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:52.304507  492391 ssh_runner.go:195] Run: sudo systemctl restart crio
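	The sed edits above amount to a handful of effective settings in /etc/crio/crio.conf.d/02-crio.conf. A minimal spot-check sketch, assuming the stock drop-in layout shipped in the kicbase image:
	
	  sudo grep -E '^\s*(pause_image|cgroup_manager|conmon_cgroup)\s*=' /etc/crio/crio.conf.d/02-crio.conf
	  # expected, per the commands logged above:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"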
	I1009 19:42:52.430292  492391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:42:52.430376  492391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:42:52.434110  492391 start.go:563] Will wait 60s for crictl version
	I1009 19:42:52.434223  492391 ssh_runner.go:195] Run: which crictl
	I1009 19:42:52.437894  492391 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:42:52.464003  492391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:42:52.464091  492391 ssh_runner.go:195] Run: crio --version
	I1009 19:42:52.497314  492391 ssh_runner.go:195] Run: crio --version
	I1009 19:42:52.531700  492391 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:42:52.534550  492391 cli_runner.go:164] Run: docker network inspect newest-cni-532612 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:42:52.550878  492391 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:42:52.554833  492391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:42:52.567977  492391 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1009 19:42:52.570814  492391 kubeadm.go:883] updating cluster {Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:42:52.570974  492391 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:42:52.571063  492391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:42:52.611126  492391 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:42:52.611150  492391 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:42:52.611207  492391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:42:52.642822  492391 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:42:52.642844  492391 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:42:52.642853  492391 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 19:42:52.642965  492391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-532612 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:42:52.643055  492391 ssh_runner.go:195] Run: crio config
	I1009 19:42:52.692146  492391 cni.go:84] Creating CNI manager for ""
	I1009 19:42:52.692167  492391 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:42:52.692184  492391 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1009 19:42:52.692227  492391 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-532612 NodeName:newest-cni-532612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:42:52.692384  492391 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-532612"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
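	Once this generated config has been written to /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below), it can be sanity-checked on the node before use. A sketch only, assuming the staged kubeadm binary and its `config validate` subcommand (present in recent kubeadm releases):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new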
	
	I1009 19:42:52.692470  492391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:42:52.700328  492391 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:42:52.700453  492391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:42:52.707662  492391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:42:52.720556  492391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:42:52.733383  492391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1009 19:42:52.746082  492391 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:42:52.750040  492391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:42:52.760131  492391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:52.868065  492391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:42:52.884071  492391 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612 for IP: 192.168.85.2
	I1009 19:42:52.884090  492391 certs.go:195] generating shared ca certs ...
	I1009 19:42:52.884106  492391 certs.go:227] acquiring lock for ca certs: {Name:mkb04535860702c1667cc2d8ee62cae6dc4e2c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:52.884241  492391 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key
	I1009 19:42:52.884285  492391 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key
	I1009 19:42:52.884291  492391 certs.go:257] generating profile certs ...
	I1009 19:42:52.884368  492391 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/client.key
	I1009 19:42:52.884412  492391 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/apiserver.key.db6af006
	I1009 19:42:52.884454  492391 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/proxy-client.key
	I1009 19:42:52.884560  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem (1338 bytes)
	W1009 19:42:52.884587  492391 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309_empty.pem, impossibly tiny 0 bytes
	I1009 19:42:52.884595  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:42:52.884619  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:42:52.884640  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:42:52.884664  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/certs/key.pem (1675 bytes)
	I1009 19:42:52.884703  492391 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem (1708 bytes)
	I1009 19:42:52.885256  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:42:52.903203  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:42:52.920601  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:42:52.938340  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:42:52.956089  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:42:52.975646  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:42:52.995347  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:42:53.016097  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/newest-cni-532612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:42:53.034537  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/certs/286309.pem --> /usr/share/ca-certificates/286309.pem (1338 bytes)
	I1009 19:42:53.057198  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/ssl/certs/2863092.pem --> /usr/share/ca-certificates/2863092.pem (1708 bytes)
	I1009 19:42:53.091221  492391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-284447/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:42:53.119761  492391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:42:53.136019  492391 ssh_runner.go:195] Run: openssl version
	I1009 19:42:53.142924  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2863092.pem && ln -fs /usr/share/ca-certificates/2863092.pem /etc/ssl/certs/2863092.pem"
	I1009 19:42:53.153356  492391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2863092.pem
	I1009 19:42:53.157233  492391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:35 /usr/share/ca-certificates/2863092.pem
	I1009 19:42:53.157350  492391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2863092.pem
	I1009 19:42:53.202287  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2863092.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:42:53.212563  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:42:53.223316  492391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:53.227656  492391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:28 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:53.227749  492391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:42:53.270585  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:42:53.280350  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286309.pem && ln -fs /usr/share/ca-certificates/286309.pem /etc/ssl/certs/286309.pem"
	I1009 19:42:53.289527  492391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286309.pem
	I1009 19:42:53.293428  492391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:35 /usr/share/ca-certificates/286309.pem
	I1009 19:42:53.293524  492391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286309.pem
	I1009 19:42:53.335297  492391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286309.pem /etc/ssl/certs/51391683.0"
	I1009 19:42:53.343923  492391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:42:53.348002  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:42:53.389325  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:42:53.436859  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:42:53.478855  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:42:53.523988  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:42:53.571261  492391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
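	The six openssl checks above can be reproduced on the node in one loop; a minimal sketch using the same certificate paths and the same 24-hour window (-checkend 86400 exits non-zero if the cert expires within that many seconds):
	
	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	    sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" >/dev/null \
	      || echo "$c expires within 24h"
	  done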
	I1009 19:42:53.637739  492391 kubeadm.go:400] StartCluster: {Name:newest-cni-532612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-532612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:53.637877  492391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:42:53.637954  492391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:42:53.694269  492391 cri.go:89] found id: ""
	I1009 19:42:53.694400  492391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:42:53.707369  492391 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:42:53.707448  492391 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:42:53.707547  492391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:42:53.718951  492391 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:42:53.719553  492391 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-532612" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:53.719869  492391 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-284447/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-532612" cluster setting kubeconfig missing "newest-cni-532612" context setting]
	I1009 19:42:53.720338  492391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:53.722634  492391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:42:53.733948  492391 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 19:42:53.734024  492391 kubeadm.go:601] duration metric: took 26.555655ms to restartPrimaryControlPlane
	I1009 19:42:53.734048  492391 kubeadm.go:402] duration metric: took 96.31827ms to StartCluster
	I1009 19:42:53.734078  492391 settings.go:142] acquiring lock: {Name:mk25224cfbab6a077f6a1af2e5b614b90a1c582a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:53.734175  492391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:42:53.735156  492391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/kubeconfig: {Name:mkf454b95328bf58b246900ed81ee00b397f7c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:42:53.735398  492391 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:42:53.735825  492391 config.go:182] Loaded profile config "newest-cni-532612": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:53.735800  492391 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:42:53.736074  492391 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-532612"
	I1009 19:42:53.736097  492391 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-532612"
	W1009 19:42:53.736111  492391 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:42:53.736170  492391 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:42:53.736125  492391 addons.go:69] Setting default-storageclass=true in profile "newest-cni-532612"
	I1009 19:42:53.736265  492391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-532612"
	I1009 19:42:53.736597  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:53.736777  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:53.736079  492391 addons.go:69] Setting dashboard=true in profile "newest-cni-532612"
	I1009 19:42:53.737183  492391 addons.go:238] Setting addon dashboard=true in "newest-cni-532612"
	W1009 19:42:53.737193  492391 addons.go:247] addon dashboard should already be in state true
	I1009 19:42:53.737215  492391 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:42:53.737622  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:53.744652  492391 out.go:179] * Verifying Kubernetes components...
	I1009 19:42:53.748210  492391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:42:53.788951  492391 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:42:53.793206  492391 addons.go:238] Setting addon default-storageclass=true in "newest-cni-532612"
	W1009 19:42:53.793231  492391 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:42:53.793267  492391 host.go:66] Checking if "newest-cni-532612" exists ...
	I1009 19:42:53.793750  492391 cli_runner.go:164] Run: docker container inspect newest-cni-532612 --format={{.State.Status}}
	I1009 19:42:53.794004  492391 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:42:53.794024  492391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:42:53.794073  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:53.826188  492391 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:42:53.826459  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:53.836084  492391 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:42:53.840201  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:42:53.840229  492391 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:42:53.840295  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:53.853946  492391 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:42:53.853969  492391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:42:53.854042  492391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-532612
	I1009 19:42:53.884119  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:53.892120  492391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/newest-cni-532612/id_rsa Username:docker}
	I1009 19:42:54.128536  492391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:42:54.160879  492391 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:42:54.160958  492391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:42:54.191882  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:42:54.191908  492391 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:42:54.204158  492391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:42:54.208980  492391 api_server.go:72] duration metric: took 473.51643ms to wait for apiserver process to appear ...
	I1009 19:42:54.209008  492391 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:42:54.209028  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:54.232444  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:42:54.232468  492391 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:42:54.247337  492391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:42:54.327908  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:42:54.327942  492391 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:42:54.423465  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:42:54.423490  492391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 19:42:54.496012  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:42:54.496038  492391 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 19:42:54.515783  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:42:54.515820  492391 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:42:54.546743  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:42:54.546769  492391 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:42:54.571347  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:42:54.571372  492391 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:42:54.593864  492391 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:42:54.593901  492391 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:42:54.617393  492391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 19:42:53.995039  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	W1009 19:42:55.997468  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	I1009 19:42:59.020567  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:42:59.020591  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:42:59.020604  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:59.132682  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:42:59.132708  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:42:59.209911  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:59.299162  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:42:59.299241  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:42:59.709443  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:42:59.718089  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:42:59.718112  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:43:00.209745  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:43:00.231157  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:43:00.231189  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:43:00.709582  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:43:00.720395  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:43:00.720419  492391 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:43:00.821399  492391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.617209904s)
	I1009 19:43:00.821457  492391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.574097403s)
	I1009 19:43:00.821820  492391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.204354364s)
	I1009 19:43:00.825130  492391 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-532612 addons enable metrics-server
	
	I1009 19:43:00.848954  492391 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1009 19:43:00.852320  492391 addons.go:514] duration metric: took 7.116506786s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1009 19:43:01.210017  492391 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 19:43:01.219230  492391 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 19:43:01.220487  492391 api_server.go:141] control plane version: v1.34.1
	I1009 19:43:01.220517  492391 api_server.go:131] duration metric: took 7.011500991s to wait for apiserver health ...
	I1009 19:43:01.220526  492391 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:43:01.227813  492391 system_pods.go:59] 8 kube-system pods found
	I1009 19:43:01.227861  492391 system_pods.go:61] "coredns-66bc5c9577-ptcc6" [9cb17d4b-1710-4794-919a-92018b128d23] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 19:43:01.227871  492391 system_pods.go:61] "etcd-newest-cni-532612" [5fa83761-6c4f-4748-be0e-55c99a748e7c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:43:01.227881  492391 system_pods.go:61] "kindnet-l62gf" [1dff8975-257b-409c-85f7-7f11e9444ec0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1009 19:43:01.227889  492391 system_pods.go:61] "kube-apiserver-newest-cni-532612" [26cb7bbd-ad4d-4bbf-a096-35c75aeb359c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:43:01.227899  492391 system_pods.go:61] "kube-controller-manager-newest-cni-532612" [0e361d42-6133-4366-b817-141687d94c94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:43:01.227906  492391 system_pods.go:61] "kube-proxy-bsq7j" [3415e29c-3f95-48f5-977e-ab18e00181ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 19:43:01.227918  492391 system_pods.go:61] "kube-scheduler-newest-cni-532612" [06cc763d-090b-497c-a0ce-b6276f27ed63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:43:01.227933  492391 system_pods.go:61] "storage-provisioner" [a572509f-c910-406c-8c63-e8b030ccb29c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 19:43:01.227945  492391 system_pods.go:74] duration metric: took 7.412332ms to wait for pod list to return data ...
	I1009 19:43:01.227956  492391 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:43:01.267167  492391 default_sa.go:45] found service account: "default"
	I1009 19:43:01.267195  492391 default_sa.go:55] duration metric: took 39.22937ms for default service account to be created ...
	I1009 19:43:01.267218  492391 kubeadm.go:586] duration metric: took 7.53174934s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 19:43:01.267242  492391 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:43:01.269851  492391 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:43:01.269892  492391 node_conditions.go:123] node cpu capacity is 2
	I1009 19:43:01.269904  492391 node_conditions.go:105] duration metric: took 2.656984ms to run NodePressure ...
	I1009 19:43:01.269915  492391 start.go:241] waiting for startup goroutines ...
	I1009 19:43:01.269923  492391 start.go:246] waiting for cluster config update ...
	I1009 19:43:01.269941  492391 start.go:255] writing updated cluster config ...
	I1009 19:43:01.270279  492391 ssh_runner.go:195] Run: rm -f paused
	I1009 19:43:01.383308  492391 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:43:01.386542  492391 out.go:179] * Done! kubectl is now configured to use "newest-cni-532612" cluster and "default" namespace by default
	W1009 19:42:58.495597  488960 pod_ready.go:104] pod "coredns-66bc5c9577-xmz2b" is not "Ready", error: <nil>
	I1009 19:42:59.994465  488960 pod_ready.go:94] pod "coredns-66bc5c9577-xmz2b" is "Ready"
	I1009 19:42:59.994488  488960 pod_ready.go:86] duration metric: took 33.505743944s for pod "coredns-66bc5c9577-xmz2b" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:42:59.999823  488960 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.015930  488960 pod_ready.go:94] pod "etcd-default-k8s-diff-port-661639" is "Ready"
	I1009 19:43:00.015961  488960 pod_ready.go:86] duration metric: took 16.107815ms for pod "etcd-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.032959  488960 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.066741  488960 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-661639" is "Ready"
	I1009 19:43:00.066831  488960 pod_ready.go:86] duration metric: took 33.839765ms for pod "kube-apiserver-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.079729  488960 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.193537  488960 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-661639" is "Ready"
	I1009 19:43:00.193629  488960 pod_ready.go:86] duration metric: took 113.80313ms for pod "kube-controller-manager-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.393413  488960 pod_ready.go:83] waiting for pod "kube-proxy-8nqdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.791997  488960 pod_ready.go:94] pod "kube-proxy-8nqdl" is "Ready"
	I1009 19:43:00.792029  488960 pod_ready.go:86] duration metric: took 398.581501ms for pod "kube-proxy-8nqdl" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:00.993142  488960 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:01.399505  488960 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-661639" is "Ready"
	I1009 19:43:01.399528  488960 pod_ready.go:86] duration metric: took 406.36074ms for pod "kube-scheduler-default-k8s-diff-port-661639" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:43:01.399540  488960 pod_ready.go:40] duration metric: took 34.916667815s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:43:01.520271  488960 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:43:01.523934  488960 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-661639" cluster and "default" namespace by default
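	
	The api_server.go lines above show minikube polling https://192.168.85.2:8443/healthz and tolerating 403 ("system:anonymous" forbidden) and 500 (rbac/bootstrap-roles post-start hook still pending) responses until the endpoint finally returns 200 "ok". A minimal, illustrative Go sketch of that retry pattern follows; it is not minikube's actual implementation, and the endpoint URL, timeout, and InsecureSkipVerify setting are assumptions made only to keep the example self-contained.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for the sketch: skip certificate verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
				// 403/500 are expected while bootstrap post-start hooks finish; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}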
	
	
	==> CRI-O <==
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.36122924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.36433985Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-bsq7j/POD" id=21dd228f-dfad-46f8-85c2-0d3f4eecab19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.364410596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.373717836Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=21dd228f-dfad-46f8-85c2-0d3f4eecab19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.378949137Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=579ce777-7000-48fd-99e1-b2535eb98247 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.410313171Z" level=info msg="Ran pod sandbox 9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c with infra container: kube-system/kindnet-l62gf/POD" id=579ce777-7000-48fd-99e1-b2535eb98247 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.419196824Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9830f89f-568c-4f00-933c-786e0afc950e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.420817447Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=42394fc0-5143-45cc-bc95-0b1b766b11cb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.422385885Z" level=info msg="Creating container: kube-system/kindnet-l62gf/kindnet-cni" id=fa52d9e4-4dac-4684-b17f-dbd5290cb94e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.422700924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.448153978Z" level=info msg="Ran pod sandbox 280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893 with infra container: kube-system/kube-proxy-bsq7j/POD" id=21dd228f-dfad-46f8-85c2-0d3f4eecab19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.448270328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.450744902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.449718411Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=86098891-8d2b-43b6-b093-6747ba16b40e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.454879615Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fa2a0edf-b579-48b9-98a2-157702f1e54c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.459479597Z" level=info msg="Creating container: kube-system/kube-proxy-bsq7j/kube-proxy" id=5b32b1e4-b655-43c2-8bbe-c02bd1a620f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.466099461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.49006704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.490726059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.526694482Z" level=info msg="Created container 489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033: kube-system/kindnet-l62gf/kindnet-cni" id=fa52d9e4-4dac-4684-b17f-dbd5290cb94e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.527716091Z" level=info msg="Starting container: 489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033" id=dce47ca9-de2e-4e24-86cc-7cc0da476d1e name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.529885307Z" level=info msg="Started container" PID=1052 containerID=489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033 description=kube-system/kindnet-l62gf/kindnet-cni id=dce47ca9-de2e-4e24-86cc-7cc0da476d1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.538440332Z" level=info msg="Created container bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856: kube-system/kube-proxy-bsq7j/kube-proxy" id=5b32b1e4-b655-43c2-8bbe-c02bd1a620f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.539223825Z" level=info msg="Starting container: bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856" id=ea70dff8-d517-4b51-bd18-c4d58011ef37 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:43:00 newest-cni-532612 crio[610]: time="2025-10-09T19:43:00.55056423Z" level=info msg="Started container" PID=1053 containerID=bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856 description=kube-system/kube-proxy-bsq7j/kube-proxy id=ea70dff8-d517-4b51-bd18-c4d58011ef37 name=/runtime.v1.RuntimeService/StartContainer sandboxID=280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bfac8d577d5ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   280a48ed2cb0c       kube-proxy-bsq7j                            kube-system
	489b35195754a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   9a14115a86750       kindnet-l62gf                               kube-system
	9d1c411171d0c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   db98b3cbb350e       kube-controller-manager-newest-cni-532612   kube-system
	07dc64a3c26fe       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   7dc7d2929683e       kube-scheduler-newest-cni-532612            kube-system
	9b7ab3972f704       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   5f3b88f90a3c6       kube-apiserver-newest-cni-532612            kube-system
	07f42cc6b9c83       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   78f1d16d447ac       etcd-newest-cni-532612                      kube-system
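	
	The container status table above is the same view the cri.go entries earlier in this log obtain by shelling out to crictl (sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system). Below is a small, illustrative Go sketch that runs that same command; it assumes crictl is on PATH and that passwordless sudo is available on the node, as it is inside the minikube container used in this run.
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Same invocation the log shows minikube using to list kube-system containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// Output is one container ID per line; empty output means no matching containers.
		fmt.Printf("%s", out)
	}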
	
	
	==> describe nodes <==
	Name:               newest-cni-532612
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-532612
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=newest-cni-532612
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_42_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:42:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-532612
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:42:59 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:42:59 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:42:59 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 09 Oct 2025 19:42:59 +0000   Thu, 09 Oct 2025 19:42:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-532612
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0fb19f8f22e48c0983b2521c34667f3
	  System UUID:                599339a9-1ab0-448e-9b04-25350ae8a3fc
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-532612                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         45s
	  kube-system                 kindnet-l62gf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-apiserver-newest-cni-532612             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-newest-cni-532612    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-bsq7j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-newest-cni-532612             100m (5%)     0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 39s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 56s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node newest-cni-532612 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     45s                kubelet          Node newest-cni-532612 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s                kubelet          Node newest-cni-532612 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s                kubelet          Node newest-cni-532612 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 45s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           41s                node-controller  Node newest-cni-532612 event: Registered Node newest-cni-532612 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-532612 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-532612 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-532612 event: Registered Node newest-cni-532612 in Controller
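	
	The node description above shows the node.kubernetes.io/not-ready:NoSchedule taint that keeps coredns and storage-provisioner Pending in the pod list earlier in this log until the CNI comes up. For reference, a brief client-go sketch for inspecting a node's taints is included below; it is not code from this test run, and the kubeconfig path and node name are assumptions for illustration.
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumed kubeconfig location; the jenkins path used in this report would differ.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "newest-cni-532612", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, t := range node.Spec.Taints {
			// e.g. "node.kubernetes.io/not-ready NoSchedule" while the network plugin is still starting.
			fmt.Println(t.Key, t.Effect)
		}
	}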
	
	
	==> dmesg <==
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[  +9.958825] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:42] overlayfs: idmapped layers are currently not supported
	[  +3.815530] overlayfs: idmapped layers are currently not supported
	[ +37.476110] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [07f42cc6b9c838cf62593bb8dbf355567bcb13a93ff3b637e6424beb09826678] <==
	{"level":"warn","ts":"2025-10-09T19:42:57.914227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:57.933279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:57.959546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:57.972458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:57.992531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.008508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.030198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.048421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.060741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.076794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.091696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.106674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.123062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.142004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.157163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.174961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.191186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.212975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.222909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.238848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.255921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.304667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.319308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.338227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:58.387903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56182","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:07 up  2:25,  0 user,  load average: 4.01, 3.37, 2.60
	Linux newest-cni-532612 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [489b35195754a14d1bd6b51b51b542263500545a2c8ee56450649056b0915033] <==
	I1009 19:43:00.715116       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:43:00.715335       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 19:43:00.715435       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:43:00.715446       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:43:00.715455       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:43:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:43:00.836124       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:43:00.836197       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:43:00.836230       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:43:00.836919       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [9b7ab3972f70411906d2737adfb5a6be317ef4ac4e38127df45ba42ee748fb65] <==
	I1009 19:42:59.400486       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:42:59.406340       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:42:59.406375       1 policy_source.go:240] refreshing policies
	I1009 19:42:59.411043       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:42:59.420082       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 19:42:59.420152       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:42:59.455609       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 19:42:59.474856       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:42:59.475081       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:42:59.475141       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 19:42:59.475154       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:42:59.475785       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:42:59.488182       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 19:42:59.488696       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:42:59.962443       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:42:59.989909       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:43:00.146913       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:43:00.245495       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:43:00.335909       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:43:00.414352       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:43:00.592932       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.250.126"}
	I1009 19:43:00.635680       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.45.181"}
	I1009 19:43:02.873011       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:43:02.973021       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:43:03.083189       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9d1c411171d0c75660af47fb4916909e0a344da9c5b1ab9af1308b477612ba13] <==
	I1009 19:43:02.585168       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:43:02.585930       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:43:02.585953       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:43:02.585961       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:43:02.587642       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 19:43:02.591392       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:43:02.595077       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:43:02.595235       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:43:02.597378       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:43:02.612091       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:43:02.613691       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:43:02.616003       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:43:02.616202       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 19:43:02.616577       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:43:02.616619       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:43:02.616673       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 19:43:02.616719       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 19:43:02.618507       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:43:02.619988       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:43:02.631424       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:43:02.632959       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:43:02.633068       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:43:02.635844       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:43:02.642491       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:43:02.646268       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [bfac8d577d5ab27d0811c55647f19742c44ea15810ddc86b9e5bec5ab3582856] <==
	I1009 19:43:00.763993       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:43:00.900018       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:43:01.003788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:43:01.003822       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 19:43:01.003914       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:43:01.024810       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:43:01.024886       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:43:01.028566       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:43:01.028907       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:43:01.028932       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:43:01.030275       1 config.go:200] "Starting service config controller"
	I1009 19:43:01.030347       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:43:01.030735       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:43:01.030785       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:43:01.031494       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:43:01.034293       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:43:01.031965       1 config.go:309] "Starting node config controller"
	I1009 19:43:01.034381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:43:01.034414       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:43:01.131502       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:43:01.134809       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:43:01.134821       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [07dc64a3c26fed0535c468419ca5c44104c3d30d68ebfb1c21ea7919703acf23] <==
	I1009 19:42:56.813913       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:42:59.011677       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:42:59.011722       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:42:59.011733       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:42:59.011740       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:42:59.400671       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:42:59.400699       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:59.409246       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:42:59.409387       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:59.409407       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:59.409424       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:42:59.510731       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.158259     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: E1009 19:42:59.435448     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-532612\" already exists" pod="kube-system/etcd-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.435485     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: E1009 19:42:59.469709     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-532612\" already exists" pod="kube-system/kube-apiserver-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.469758     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.505309     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.505438     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.505466     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.507559     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: E1009 19:42:59.522248     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-532612\" already exists" pod="kube-system/kube-controller-manager-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: I1009 19:42:59.522308     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-532612"
	Oct 09 19:42:59 newest-cni-532612 kubelet[726]: E1009 19:42:59.549993     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-532612\" already exists" pod="kube-system/kube-scheduler-newest-cni-532612"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.018977     726 apiserver.go:52] "Watching apiserver"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.042475     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.102565     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3415e29c-3f95-48f5-977e-ab18e00181ab-lib-modules\") pod \"kube-proxy-bsq7j\" (UID: \"3415e29c-3f95-48f5-977e-ab18e00181ab\") " pod="kube-system/kube-proxy-bsq7j"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.102906     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-cni-cfg\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.103107     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-xtables-lock\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.103677     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3415e29c-3f95-48f5-977e-ab18e00181ab-xtables-lock\") pod \"kube-proxy-bsq7j\" (UID: \"3415e29c-3f95-48f5-977e-ab18e00181ab\") " pod="kube-system/kube-proxy-bsq7j"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.105200     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dff8975-257b-409c-85f7-7f11e9444ec0-lib-modules\") pod \"kindnet-l62gf\" (UID: \"1dff8975-257b-409c-85f7-7f11e9444ec0\") " pod="kube-system/kindnet-l62gf"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: I1009 19:43:00.165021     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: W1009 19:43:00.397852     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/crio-9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c WatchSource:0}: Error finding container 9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c: Status 404 returned error can't find the container with id 9a14115a86750ec5ee0b0b03eecfd3b89c3a6e501a23a533ca6b507fd4d06f2c
	Oct 09 19:43:00 newest-cni-532612 kubelet[726]: W1009 19:43:00.410202     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2d63c6e10b44df29239be264a2d5b4e80c17d3a2a2c11a9c5d5ccc55404e75c9/crio-280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893 WatchSource:0}: Error finding container 280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893: Status 404 returned error can't find the container with id 280a48ed2cb0c6df225ce0596ff9f3d88fa71b404e4cc0d8fca1b43c8b0e4893
	Oct 09 19:43:02 newest-cni-532612 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:43:02 newest-cni-532612 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:43:02 newest-cni-532612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-532612 -n newest-cni-532612
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-532612 -n newest-cni-532612: exit status 2 (376.684142ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-532612 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ptcc6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nd9vc kubernetes-dashboard-855c9754f9-xxc5g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nd9vc kubernetes-dashboard-855c9754f9-xxc5g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nd9vc kubernetes-dashboard-855c9754f9-xxc5g: exit status 1 (83.008343ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ptcc6" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nd9vc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xxc5g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-532612 describe pod coredns-66bc5c9577-ptcc6 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nd9vc kubernetes-dashboard-855c9754f9-xxc5g: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-661639 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-661639 --alsologtostderr -v=1: exit status 80 (2.085742259s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-661639 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:43:13.516587  496015 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:43:13.516804  496015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:13.516828  496015 out.go:374] Setting ErrFile to fd 2...
	I1009 19:43:13.516846  496015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:13.517116  496015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:43:13.517387  496015 out.go:368] Setting JSON to false
	I1009 19:43:13.517432  496015 mustload.go:65] Loading cluster: default-k8s-diff-port-661639
	I1009 19:43:13.517838  496015 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:43:13.518362  496015 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-661639 --format={{.State.Status}}
	I1009 19:43:13.555670  496015 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:43:13.556039  496015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:43:13.649500  496015 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:69 SystemTime:2025-10-09 19:43:13.639849798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:43:13.650242  496015 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-661639 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 19:43:13.655787  496015 out.go:179] * Pausing node default-k8s-diff-port-661639 ... 
	I1009 19:43:13.659896  496015 host.go:66] Checking if "default-k8s-diff-port-661639" exists ...
	I1009 19:43:13.660234  496015 ssh_runner.go:195] Run: systemctl --version
	I1009 19:43:13.660278  496015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-661639
	I1009 19:43:13.679057  496015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/default-k8s-diff-port-661639/id_rsa Username:docker}
	I1009 19:43:13.796777  496015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:43:13.811615  496015 pause.go:52] kubelet running: true
	I1009 19:43:13.811682  496015 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:43:14.115499  496015 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:43:14.115609  496015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:43:14.242549  496015 cri.go:89] found id: "6cbe6c67db4398fd97cd8a706d13b1c09be849299e98a9105cec8cf358e5cad5"
	I1009 19:43:14.242611  496015 cri.go:89] found id: "6404f07f9d160418524590fb317abefb678e64e1458895af50b2050a46f7fcf7"
	I1009 19:43:14.242641  496015 cri.go:89] found id: "28c9d8313df30cc8a4b34901f3b83b589a35e68d03fda99ea2845163cabc4713"
	I1009 19:43:14.242662  496015 cri.go:89] found id: "3004c804b31e0d8773c58d46c285b0e2e8a522ccfabf81c57f67008d4414bcd6"
	I1009 19:43:14.242687  496015 cri.go:89] found id: "5e41ebd074984c566e8b2d974f6ba815a8ade1b0fe66ca6de30500fa900fc1f8"
	I1009 19:43:14.242714  496015 cri.go:89] found id: "61a4c1355e9141371be19936476667f45feaf5cb8cb543e4b20e6dca262e451c"
	I1009 19:43:14.242733  496015 cri.go:89] found id: "768ba8a2857f78e85519fbea6febfc8bd4969620ca951c7d260ada4b7c79e0d0"
	I1009 19:43:14.242756  496015 cri.go:89] found id: "fdf381ba0047cf30aba12aa77f6c2451060e006b9680d6c86f071cb8a93a48aa"
	I1009 19:43:14.242793  496015 cri.go:89] found id: "098c492e4b1b7624dacdb34909738100a576f8ba91c34e3d4554ab1dd15c385a"
	I1009 19:43:14.242823  496015 cri.go:89] found id: "f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12"
	I1009 19:43:14.242852  496015 cri.go:89] found id: "f3f7bcb0354a582c2e24deff0f77a6efa7f0bb494ebd1df8e4c28627cd72ba19"
	I1009 19:43:14.242880  496015 cri.go:89] found id: ""
	I1009 19:43:14.242944  496015 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:43:14.255592  496015 retry.go:31] will retry after 183.834486ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:14Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:43:14.440081  496015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:43:14.458353  496015 pause.go:52] kubelet running: false
	I1009 19:43:14.458453  496015 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:43:14.710498  496015 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:43:14.710622  496015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:43:14.811671  496015 cri.go:89] found id: "6cbe6c67db4398fd97cd8a706d13b1c09be849299e98a9105cec8cf358e5cad5"
	I1009 19:43:14.811748  496015 cri.go:89] found id: "6404f07f9d160418524590fb317abefb678e64e1458895af50b2050a46f7fcf7"
	I1009 19:43:14.811769  496015 cri.go:89] found id: "28c9d8313df30cc8a4b34901f3b83b589a35e68d03fda99ea2845163cabc4713"
	I1009 19:43:14.811788  496015 cri.go:89] found id: "3004c804b31e0d8773c58d46c285b0e2e8a522ccfabf81c57f67008d4414bcd6"
	I1009 19:43:14.811812  496015 cri.go:89] found id: "5e41ebd074984c566e8b2d974f6ba815a8ade1b0fe66ca6de30500fa900fc1f8"
	I1009 19:43:14.811839  496015 cri.go:89] found id: "61a4c1355e9141371be19936476667f45feaf5cb8cb543e4b20e6dca262e451c"
	I1009 19:43:14.811861  496015 cri.go:89] found id: "768ba8a2857f78e85519fbea6febfc8bd4969620ca951c7d260ada4b7c79e0d0"
	I1009 19:43:14.811884  496015 cri.go:89] found id: "fdf381ba0047cf30aba12aa77f6c2451060e006b9680d6c86f071cb8a93a48aa"
	I1009 19:43:14.811907  496015 cri.go:89] found id: "098c492e4b1b7624dacdb34909738100a576f8ba91c34e3d4554ab1dd15c385a"
	I1009 19:43:14.811941  496015 cri.go:89] found id: "f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12"
	I1009 19:43:14.811962  496015 cri.go:89] found id: "f3f7bcb0354a582c2e24deff0f77a6efa7f0bb494ebd1df8e4c28627cd72ba19"
	I1009 19:43:14.811985  496015 cri.go:89] found id: ""
	I1009 19:43:14.812073  496015 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:43:14.827766  496015 retry.go:31] will retry after 241.5399ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:14Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:43:15.070324  496015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:43:15.086925  496015 pause.go:52] kubelet running: false
	I1009 19:43:15.087009  496015 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 19:43:15.281753  496015 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 19:43:15.281849  496015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 19:43:15.358912  496015 cri.go:89] found id: "6cbe6c67db4398fd97cd8a706d13b1c09be849299e98a9105cec8cf358e5cad5"
	I1009 19:43:15.358933  496015 cri.go:89] found id: "6404f07f9d160418524590fb317abefb678e64e1458895af50b2050a46f7fcf7"
	I1009 19:43:15.358939  496015 cri.go:89] found id: "28c9d8313df30cc8a4b34901f3b83b589a35e68d03fda99ea2845163cabc4713"
	I1009 19:43:15.358943  496015 cri.go:89] found id: "3004c804b31e0d8773c58d46c285b0e2e8a522ccfabf81c57f67008d4414bcd6"
	I1009 19:43:15.358946  496015 cri.go:89] found id: "5e41ebd074984c566e8b2d974f6ba815a8ade1b0fe66ca6de30500fa900fc1f8"
	I1009 19:43:15.358950  496015 cri.go:89] found id: "61a4c1355e9141371be19936476667f45feaf5cb8cb543e4b20e6dca262e451c"
	I1009 19:43:15.358953  496015 cri.go:89] found id: "768ba8a2857f78e85519fbea6febfc8bd4969620ca951c7d260ada4b7c79e0d0"
	I1009 19:43:15.358963  496015 cri.go:89] found id: "fdf381ba0047cf30aba12aa77f6c2451060e006b9680d6c86f071cb8a93a48aa"
	I1009 19:43:15.358966  496015 cri.go:89] found id: "098c492e4b1b7624dacdb34909738100a576f8ba91c34e3d4554ab1dd15c385a"
	I1009 19:43:15.358973  496015 cri.go:89] found id: "f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12"
	I1009 19:43:15.358980  496015 cri.go:89] found id: "f3f7bcb0354a582c2e24deff0f77a6efa7f0bb494ebd1df8e4c28627cd72ba19"
	I1009 19:43:15.358984  496015 cri.go:89] found id: ""
	I1009 19:43:15.359036  496015 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:43:15.415121  496015 out.go:203] 
	W1009 19:43:15.446280  496015 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:43:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:43:15.446309  496015 out.go:285] * 
	* 
	W1009 19:43:15.453568  496015 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:43:15.508640  496015 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-661639 --alsologtostderr -v=1 failed: exit status 80
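Note on the failure mode: the pause path exits before anything is paused. As the trace above shows, after disabling the kubelet it enumerates running containers with "sudo runc list -f json", which exits 1 because /run/runc does not exist on the node. A minimal sketch for reproducing that check by hand, reusing the same commands the trace logs (the profile name comes from this test; the ls step is an added diagnostic, not something minikube itself runs):

	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-661639
	# inside the node:
	sudo systemctl is-active --quiet service kubelet; echo $?   # kubelet state the pause path checks first
	ls -ld /run/runc                                            # diagnostic: the runc root dir the error points at
	sudo runc list -f json                                      # the listing call that exits 1 in the trace above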
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-661639
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-661639:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438",
	        "Created": "2025-10-09T19:40:19.30361096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489092,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:42:07.279603606Z",
	            "FinishedAt": "2025-10-09T19:42:06.275932615Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/hostname",
	        "HostsPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/hosts",
	        "LogPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438-json.log",
	        "Name": "/default-k8s-diff-port-661639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-661639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-661639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438",
	                "LowerDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-661639",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-661639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-661639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-661639",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-661639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6df7309405da38ef4757f3fbb0c227e40127839cbf2b6a95b54392b5a07d8dc6",
	            "SandboxKey": "/var/run/docker/netns/6df7309405da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-661639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:a0:a6:ce:0e:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86fdc851eb8ad7fec0353db31405ee8fa251cbc2c81dd836e7fbb59e4102b63e",
	                    "EndpointID": "a66130a12f3b5558949058ac4b0df1f632d09fd53604de871bcc8c2f2505b3bf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-661639",
	                        "09130103b04f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639: exit status 2 (344.486552ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-661639 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-661639 logs -n 25: (2.378783938s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p disable-driver-mounts-557073                                                                                                                                                                                                               │ disable-driver-mounts-557073 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ image   │ embed-certs-779570 image list --format=json                                                                                                                                                                                                   │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-779570 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-661639 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-661639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-532612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-532612 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-532612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:43 UTC │
	│ image   │ newest-cni-532612 image list --format=json                                                                                                                                                                                                    │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ pause   │ -p newest-cni-532612 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ delete  │ -p newest-cni-532612                                                                                                                                                                                                                          │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ delete  │ -p newest-cni-532612                                                                                                                                                                                                                          │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ start   │ -p auto-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-224541                  │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ image   │ default-k8s-diff-port-661639 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ pause   │ -p default-k8s-diff-port-661639 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:43:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:43:11.031960  495668 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:43:11.032091  495668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:11.032110  495668 out.go:374] Setting ErrFile to fd 2...
	I1009 19:43:11.032116  495668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:11.032388  495668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:43:11.032801  495668 out.go:368] Setting JSON to false
	I1009 19:43:11.033697  495668 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8742,"bootTime":1760030249,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:43:11.033764  495668 start.go:141] virtualization:  
	I1009 19:43:11.039770  495668 out.go:179] * [auto-224541] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:43:11.043026  495668 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:43:11.043138  495668 notify.go:220] Checking for updates...
	I1009 19:43:11.049199  495668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:43:11.052220  495668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:43:11.055296  495668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:43:11.058394  495668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:43:11.067017  495668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:43:11.070678  495668 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:43:11.070827  495668 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:43:11.094686  495668 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:43:11.094854  495668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:43:11.154588  495668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:43:11.144659746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:43:11.154704  495668 docker.go:318] overlay module found
	I1009 19:43:11.157952  495668 out.go:179] * Using the docker driver based on user configuration
	I1009 19:43:11.161678  495668 start.go:305] selected driver: docker
	I1009 19:43:11.161701  495668 start.go:925] validating driver "docker" against <nil>
	I1009 19:43:11.161715  495668 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:43:11.162552  495668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:43:11.217358  495668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:43:11.208314137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:43:11.217509  495668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:43:11.217738  495668 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:43:11.220726  495668 out.go:179] * Using Docker driver with root privileges
	I1009 19:43:11.223596  495668 cni.go:84] Creating CNI manager for ""
	I1009 19:43:11.223671  495668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:43:11.223685  495668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:43:11.223772  495668 start.go:349] cluster config:
	{Name:auto-224541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-224541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1009 19:43:11.227148  495668 out.go:179] * Starting "auto-224541" primary control-plane node in "auto-224541" cluster
	I1009 19:43:11.230195  495668 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:43:11.233167  495668 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:43:11.236016  495668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:43:11.236061  495668 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:43:11.236072  495668 cache.go:64] Caching tarball of preloaded images
	I1009 19:43:11.236100  495668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:43:11.236155  495668 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:43:11.236165  495668 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:43:11.236284  495668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/config.json ...
	I1009 19:43:11.236302  495668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/config.json: {Name:mk0a58dbf54a36ca916ce532ad36dae09e187626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:43:11.255315  495668 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:43:11.255340  495668 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:43:11.255360  495668 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:43:11.255383  495668 start.go:360] acquireMachinesLock for auto-224541: {Name:mkd19eaac244b4ec07885780d95fbae4d5b76489 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:43:11.255508  495668 start.go:364] duration metric: took 105.462µs to acquireMachinesLock for "auto-224541"
	I1009 19:43:11.255541  495668 start.go:93] Provisioning new machine with config: &{Name:auto-224541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-224541 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:43:11.255623  495668 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:43:11.259067  495668 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:43:11.259315  495668 start.go:159] libmachine.API.Create for "auto-224541" (driver="docker")
	I1009 19:43:11.259379  495668 client.go:168] LocalClient.Create starting
	I1009 19:43:11.259459  495668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:43:11.259501  495668 main.go:141] libmachine: Decoding PEM data...
	I1009 19:43:11.259518  495668 main.go:141] libmachine: Parsing certificate...
	I1009 19:43:11.259583  495668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:43:11.259607  495668 main.go:141] libmachine: Decoding PEM data...
	I1009 19:43:11.259619  495668 main.go:141] libmachine: Parsing certificate...
	I1009 19:43:11.260000  495668 cli_runner.go:164] Run: docker network inspect auto-224541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:43:11.276896  495668 cli_runner.go:211] docker network inspect auto-224541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:43:11.276985  495668 network_create.go:284] running [docker network inspect auto-224541] to gather additional debugging logs...
	I1009 19:43:11.277004  495668 cli_runner.go:164] Run: docker network inspect auto-224541
	W1009 19:43:11.296308  495668 cli_runner.go:211] docker network inspect auto-224541 returned with exit code 1
	I1009 19:43:11.296335  495668 network_create.go:287] error running [docker network inspect auto-224541]: docker network inspect auto-224541: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-224541 not found
	I1009 19:43:11.296361  495668 network_create.go:289] output of [docker network inspect auto-224541]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-224541 not found
	
	** /stderr **
	I1009 19:43:11.296461  495668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:43:11.313176  495668 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:43:11.313531  495668 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:43:11.313750  495668 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:43:11.314040  495668 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86fdc851eb8a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:2e:ad:fa:a7:05} reservation:<nil>}
	I1009 19:43:11.314646  495668 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0d2c0}
	I1009 19:43:11.314673  495668 network_create.go:124] attempt to create docker network auto-224541 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 19:43:11.314735  495668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-224541 auto-224541
	I1009 19:43:11.382997  495668 network_create.go:108] docker network auto-224541 192.168.85.0/24 created
	I1009 19:43:11.383029  495668 kic.go:121] calculated static IP "192.168.85.2" for the "auto-224541" container
	I1009 19:43:11.383125  495668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:43:11.399926  495668 cli_runner.go:164] Run: docker volume create auto-224541 --label name.minikube.sigs.k8s.io=auto-224541 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:43:11.421821  495668 oci.go:103] Successfully created a docker volume auto-224541
	I1009 19:43:11.421907  495668 cli_runner.go:164] Run: docker run --rm --name auto-224541-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-224541 --entrypoint /usr/bin/test -v auto-224541:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:43:11.951843  495668 oci.go:107] Successfully prepared a docker volume auto-224541
	I1009 19:43:11.951933  495668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:43:11.951949  495668 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:43:11.952024  495668 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-224541:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
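The start log above shows minikube skipping the subnets already held by other profiles, picking the free 192.168.85.0/24 range, creating the auto-224541 docker network, and then seeding a docker volume of the same name with the CRI-O preload tarball. A minimal sketch for spot-checking those host-side artifacts with the docker CLI (assuming the auto-224541 network and volume have not yet been cleaned up by a later delete):

  docker network inspect auto-224541 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
  docker volume inspect auto-224541 --format '{{.Mountpoint}}'

Both commands fail with a "not found" error once the profile has been deleted, which is the expected end state after the test run.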
	
	
	==> CRI-O <==
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.382676562Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.387433798Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.387592397Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.387664644Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.395551624Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.395587194Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.395610595Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.401692696Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.401839718Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.401925241Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.407570077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.407612367Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.624696932Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2fde7b6a-f61b-4453-bcb6-a91739a0c233 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.626265886Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=29c9f748-5b49-42f7-917f-bfdb0d81e152 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.627409564Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq/dashboard-metrics-scraper" id=966cb053-7123-4d9d-8055-d0d74340ae43 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.627610174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.63983996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.640566829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.666529017Z" level=info msg="Created container f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq/dashboard-metrics-scraper" id=966cb053-7123-4d9d-8055-d0d74340ae43 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.668660892Z" level=info msg="Starting container: f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12" id=37ace9fa-6f41-44de-b4a0-215731e5ba01 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.671115773Z" level=info msg="Started container" PID=1726 containerID=f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq/dashboard-metrics-scraper id=37ace9fa-6f41-44de-b4a0-215731e5ba01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=267213d97cdf6f5f7b49d2c2db2bc1b5bd0d671f82f6185f3855e7626f38d0d0
	Oct 09 19:43:12 default-k8s-diff-port-661639 conmon[1724]: conmon f99a0322f89ddcd05f66 <ninfo>: container 1726 exited with status 1
	Oct 09 19:43:13 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:13.249864056Z" level=info msg="Removing container: 66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a" id=80d09538-0b55-4758-af9f-6b28ae139bed name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:43:13 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:13.270535169Z" level=info msg="Error loading conmon cgroup of container 66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a: cgroup deleted" id=80d09538-0b55-4758-af9f-6b28ae139bed name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:43:13 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:13.275650678Z" level=info msg="Removed container 66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq/dashboard-metrics-scraper" id=80d09538-0b55-4758-af9f-6b28ae139bed name=/runtime.v1.RuntimeService/RemoveContainer
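The CRI-O log above shows the dashboard-metrics-scraper container being created, started as PID 1726, exiting with status 1, and the previous attempt being removed. A minimal sketch for inspecting the failing container directly on the node, reusing the truncated container ID from this log (assumes the default-k8s-diff-port-661639 cluster is still running):

  out/minikube-linux-arm64 -p default-k8s-diff-port-661639 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
  out/minikube-linux-arm64 -p default-k8s-diff-port-661639 ssh -- sudo crictl logs f99a0322f89dd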
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f99a0322f89dd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago        Exited              dashboard-metrics-scraper   3                   267213d97cdf6       dashboard-metrics-scraper-6ffb444bf9-cvwrq             kubernetes-dashboard
	6cbe6c67db439       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   df7684b3cd285       storage-provisioner                                    kube-system
	f3f7bcb0354a5       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   8ab5b80a13ab3       kubernetes-dashboard-855c9754f9-zdn2m                  kubernetes-dashboard
	35a1d9f668e7d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   48e73eb4a9518       busybox                                                default
	6404f07f9d160       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   df7684b3cd285       storage-provisioner                                    kube-system
	28c9d8313df30       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   2867e5479d1c2       kube-proxy-8nqdl                                       kube-system
	3004c804b31e0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   99cc4cc892e1d       kindnet-29w5k                                          kube-system
	5e41ebd074984       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   da69533526412       coredns-66bc5c9577-xmz2b                               kube-system
	61a4c1355e914       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   fe62ec7037c31       kube-controller-manager-default-k8s-diff-port-661639   kube-system
	768ba8a2857f7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4a75f48153c80       etcd-default-k8s-diff-port-661639                      kube-system
	fdf381ba0047c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   0cc8c9caedb01       kube-apiserver-default-k8s-diff-port-661639            kube-system
	098c492e4b1b7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   932bb35ae6693       kube-scheduler-default-k8s-diff-port-661639            kube-system
	
	
	==> coredns [5e41ebd074984c566e8b2d974f6ba815a8ade1b0fe66ca6de30500fa900fc1f8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38358 - 31736 "HINFO IN 5600871389416133094.1330390292693617133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034317052s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
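	The i/o timeouts against 10.96.0.1:443 above are consistent with the kube-apiserver restart that the apiserver log further down shows finishing its cache sync around 19:42:24; CoreDNS retries until the in-cluster service is reachable again. A minimal sketch for confirming the kubernetes Service and its backing endpoints after such a restart (the kubectl context name is assumed to match the profile name, which is how minikube normally writes the kubeconfig):

  kubectl --context default-k8s-diff-port-661639 get svc kubernetes -o wide
  kubectl --context default-k8s-diff-port-661639 get endpointslices -l kubernetes.io/service-name=kubernetes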
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-661639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-661639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=default-k8s-diff-port-661639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_40_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:40:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-661639
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:43:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:42:54 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:42:54 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:42:54 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:42:54 +0000   Thu, 09 Oct 2025 19:41:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-661639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 522e28c9acf042da9c57e8cd1d193dc7
	  System UUID:                7c98678a-bd01-4444-9c47-8681509e122a
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-xmz2b                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-default-k8s-diff-port-661639                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-29w5k                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-661639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-661639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-8nqdl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-661639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cvwrq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zdn2m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m20s                  node-controller  Node default-k8s-diff-port-661639 event: Registered Node default-k8s-diff-port-661639 in Controller
	  Normal   NodeReady                97s                    kubelet          Node default-k8s-diff-port-661639 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node default-k8s-diff-port-661639 event: Registered Node default-k8s-diff-port-661639 in Controller
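	The node description above is a point-in-time capture; the duplicated "Starting kubelet" and RegisteredNode events reflect the stop/start cycle recorded in the command table at the top of this log. A minimal sketch for re-querying the node and the dashboard pods live against the same profile (context name assumed to equal the profile name):

  kubectl --context default-k8s-diff-port-661639 describe node default-k8s-diff-port-661639
  kubectl --context default-k8s-diff-port-661639 -n kubernetes-dashboard get pods -o wide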
	
	
	==> dmesg <==
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[  +9.958825] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:42] overlayfs: idmapped layers are currently not supported
	[  +3.815530] overlayfs: idmapped layers are currently not supported
	[ +37.476110] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [768ba8a2857f78e85519fbea6febfc8bd4969620ca951c7d260ada4b7c79e0d0] <==
	{"level":"warn","ts":"2025-10-09T19:42:19.784678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:19.855324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:19.903620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:19.956285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.036713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.082751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.227743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.262451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.298946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.353943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.410498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.446243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.511307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.534643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.577658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.650194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.686106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.729904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.784506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.823981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.884509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.935299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:21.030329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:21.047400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:21.222979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56728","server-name":"","error":"EOF"}
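	The repeated "rejected connection on client endpoint ... EOF" warnings above are what etcd logs when a client opens a TCP connection and closes it without completing the TLS handshake, which is typically probe traffic during an apiserver restart rather than a storage problem. A minimal sketch for checking etcd health from inside the static pod; the certificate paths are assumptions based on minikube's usual /var/lib/minikube/certs layout:

  kubectl --context default-k8s-diff-port-661639 -n kube-system exec etcd-default-k8s-diff-port-661639 -- \
    etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health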
	
	
	==> kernel <==
	 19:43:17 up  2:25,  0 user,  load average: 3.86, 3.36, 2.61
	Linux default-k8s-diff-port-661639 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3004c804b31e0d8773c58d46c285b0e2e8a522ccfabf81c57f67008d4414bcd6] <==
	I1009 19:42:25.117523       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:42:25.117914       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 19:42:25.118249       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:42:25.118298       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:42:25.118310       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:42:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:42:25.372862       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:42:25.372905       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:42:25.372916       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:42:25.379348       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:42:55.373164       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:42:55.373279       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:42:55.375592       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 19:42:55.375691       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 19:42:56.673666       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:42:56.673702       1 metrics.go:72] Registering metrics
	I1009 19:42:56.673754       1 controller.go:711] "Syncing nftables rules"
	I1009 19:43:05.377334       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:43:05.377417       1 main.go:301] handling current node
	I1009 19:43:15.372437       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:43:15.372475       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fdf381ba0047cf30aba12aa77f6c2451060e006b9680d6c86f071cb8a93a48aa] <==
	I1009 19:42:23.965929       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 19:42:23.965968       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:42:23.984308       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 19:42:23.984607       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:42:23.984962       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:42:23.985026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:42:24.035465       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:42:24.035891       1 aggregator.go:171] initial CRD sync complete...
	I1009 19:42:24.035943       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 19:42:24.035975       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:42:24.036002       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:42:24.062269       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 19:42:24.081881       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:42:24.178970       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:42:24.214836       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:42:24.225670       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1009 19:42:24.230568       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:42:24.748261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:42:25.008329       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:42:25.086875       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:42:25.398703       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.185.147"}
	I1009 19:42:25.523024       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.138.250"}
	I1009 19:42:28.536793       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:42:28.805446       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:42:28.859149       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [61a4c1355e9141371be19936476667f45feaf5cb8cb543e4b20e6dca262e451c] <==
	I1009 19:42:28.473032       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:42:28.473074       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 19:42:28.484345       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:42:28.484567       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 19:42:28.484722       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:42:28.484900       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-661639"
	I1009 19:42:28.485039       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 19:42:28.498333       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:42:28.498299       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:42:28.498503       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:42:28.498546       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:42:28.501398       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:42:28.502027       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:42:28.502417       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 19:42:28.502611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:42:28.502654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:42:28.502678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:42:28.502706       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:42:28.511841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:42:28.520838       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:42:28.546890       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:42:28.548192       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:42:28.555597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:42:28.555623       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:42:28.555630       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [28c9d8313df30cc8a4b34901f3b83b589a35e68d03fda99ea2845163cabc4713] <==
	I1009 19:42:25.685049       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:42:25.955181       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:42:26.058757       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:42:26.059726       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 19:42:26.059938       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:42:26.116899       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:42:26.117023       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:42:26.124961       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:42:26.125310       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:42:26.125335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:26.126712       1 config.go:200] "Starting service config controller"
	I1009 19:42:26.126739       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:42:26.126755       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:42:26.126759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:42:26.126773       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:42:26.126777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:42:26.127753       1 config.go:309] "Starting node config controller"
	I1009 19:42:26.127832       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:42:26.127862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:42:26.227619       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:42:26.227633       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:42:26.227663       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [098c492e4b1b7624dacdb34909738100a576f8ba91c34e3d4554ab1dd15c385a] <==
	I1009 19:42:20.743011       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:42:25.544677       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:42:25.544708       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:25.587961       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:42:25.588114       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:25.589078       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:25.588082       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 19:42:25.589221       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 19:42:25.588127       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:42:25.601926       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:42:25.588140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:42:25.689204       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:25.689325       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 19:42:25.707205       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:42:29 default-k8s-diff-port-661639 kubelet[778]: W1009 19:42:29.403014     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/crio-8ab5b80a13ab390d5a7058c9ecd12c325390cf21ce548bf208c725b1401a4257 WatchSource:0}: Error finding container 8ab5b80a13ab390d5a7058c9ecd12c325390cf21ce548bf208c725b1401a4257: Status 404 returned error can't find the container with id 8ab5b80a13ab390d5a7058c9ecd12c325390cf21ce548bf208c725b1401a4257
	Oct 09 19:42:29 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:29.806895     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 09 19:42:36 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:36.118706     778 scope.go:117] "RemoveContainer" containerID="b51f6314a1d6036b6895183f47828a41817c274e7e796a5bda9004885c51bf71"
	Oct 09 19:42:37 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:37.123131     778 scope.go:117] "RemoveContainer" containerID="b51f6314a1d6036b6895183f47828a41817c274e7e796a5bda9004885c51bf71"
	Oct 09 19:42:37 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:37.133488     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:37 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:37.133821     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:42:38 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:38.127758     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:38 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:38.127931     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:42:39 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:39.332903     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:39 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:39.333119     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:42:50 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:50.623229     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:51 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:51.172307     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:51 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:51.172679     778 scope.go:117] "RemoveContainer" containerID="66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a"
	Oct 09 19:42:51 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:51.172966     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:42:51 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:51.206349     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zdn2m" podStartSLOduration=12.822849499 podStartE2EDuration="23.206320941s" podCreationTimestamp="2025-10-09 19:42:28 +0000 UTC" firstStartedPulling="2025-10-09 19:42:29.410815864 +0000 UTC m=+14.244198341" lastFinishedPulling="2025-10-09 19:42:39.794287305 +0000 UTC m=+24.627669783" observedRunningTime="2025-10-09 19:42:40.152432116 +0000 UTC m=+24.985814594" watchObservedRunningTime="2025-10-09 19:42:51.206320941 +0000 UTC m=+36.039703427"
	Oct 09 19:42:56 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:56.188383     778 scope.go:117] "RemoveContainer" containerID="6404f07f9d160418524590fb317abefb678e64e1458895af50b2050a46f7fcf7"
	Oct 09 19:42:59 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:59.333426     778 scope.go:117] "RemoveContainer" containerID="66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a"
	Oct 09 19:42:59 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:59.333608     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:43:12 default-k8s-diff-port-661639 kubelet[778]: I1009 19:43:12.623646     778 scope.go:117] "RemoveContainer" containerID="66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a"
	Oct 09 19:43:13 default-k8s-diff-port-661639 kubelet[778]: I1009 19:43:13.236553     778 scope.go:117] "RemoveContainer" containerID="66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a"
	Oct 09 19:43:13 default-k8s-diff-port-661639 kubelet[778]: I1009 19:43:13.236840     778 scope.go:117] "RemoveContainer" containerID="f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12"
	Oct 09 19:43:13 default-k8s-diff-port-661639 kubelet[778]: E1009 19:43:13.236997     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:43:14 default-k8s-diff-port-661639 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:43:14 default-k8s-diff-port-661639 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:43:14 default-k8s-diff-port-661639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f3f7bcb0354a582c2e24deff0f77a6efa7f0bb494ebd1df8e4c28627cd72ba19] <==
	2025/10/09 19:42:39 Using namespace: kubernetes-dashboard
	2025/10/09 19:42:39 Using in-cluster config to connect to apiserver
	2025/10/09 19:42:39 Using secret token for csrf signing
	2025/10/09 19:42:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 19:42:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 19:42:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 19:42:39 Generating JWE encryption key
	2025/10/09 19:42:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 19:42:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 19:42:40 Initializing JWE encryption key from synchronized object
	2025/10/09 19:42:40 Creating in-cluster Sidecar client
	2025/10/09 19:42:40 Serving insecurely on HTTP port: 9090
	2025/10/09 19:42:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:43:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:42:39 Starting overwatch
	
	
	==> storage-provisioner [6404f07f9d160418524590fb317abefb678e64e1458895af50b2050a46f7fcf7] <==
	I1009 19:42:25.219897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:42:55.224482       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6cbe6c67db4398fd97cd8a706d13b1c09be849299e98a9105cec8cf358e5cad5] <==
	I1009 19:42:56.283667       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:42:56.303712       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:42:56.303840       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 19:42:56.306094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:42:59.762020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:04.022528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:07.628566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:10.682207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:13.706610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:13.722415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:43:13.722596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:43:13.722797       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661639_523d3490-97e2-4084-8e87-7cd89664ca5b!
	I1009 19:43:13.724005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36623463-521a-4e44-abb0-3a458f21ddd5", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-661639_523d3490-97e2-4084-8e87-7cd89664ca5b became leader
	W1009 19:43:13.746174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:13.758846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:43:13.824059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661639_523d3490-97e2-4084-8e87-7cd89664ca5b!
	W1009 19:43:15.762728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:15.778380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:17.792646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:17.810034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639: exit status 2 (517.280633ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-661639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-661639
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-661639:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438",
	        "Created": "2025-10-09T19:40:19.30361096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489092,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:42:07.279603606Z",
	            "FinishedAt": "2025-10-09T19:42:06.275932615Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/hostname",
	        "HostsPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/hosts",
	        "LogPath": "/var/lib/docker/containers/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438-json.log",
	        "Name": "/default-k8s-diff-port-661639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-661639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-661639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438",
	                "LowerDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b-init/diff:/var/lib/docker/overlay2/f98f3f944554ab6caac62e32a1888582f6fc9349322b33fdd1e0eeff29a7d22a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9da4f121c4b2549433f98ccb497201fb86852c1799800a28e68d334f53c9e3b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-661639",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-661639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-661639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-661639",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-661639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6df7309405da38ef4757f3fbb0c227e40127839cbf2b6a95b54392b5a07d8dc6",
	            "SandboxKey": "/var/run/docker/netns/6df7309405da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-661639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:a0:a6:ce:0e:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86fdc851eb8ad7fec0353db31405ee8fa251cbc2c81dd836e7fbb59e4102b63e",
	                    "EndpointID": "a66130a12f3b5558949058ac4b0df1f632d09fd53604de871bcc8c2f2505b3bf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-661639",
	                        "09130103b04f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639: exit status 2 (418.258218ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-661639 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-661639 logs -n 25: (1.556475283s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-678119                                                                                                                                                                                                                          │ no-preload-678119            │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ delete  │ -p disable-driver-mounts-557073                                                                                                                                                                                                               │ disable-driver-mounts-557073 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:40 UTC │
	│ start   │ -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:40 UTC │ 09 Oct 25 19:41 UTC │
	│ image   │ embed-certs-779570 image list --format=json                                                                                                                                                                                                   │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-779570 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-779570                                                                                                                                                                                                                         │ embed-certs-779570           │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-661639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-661639 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:41 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-661639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-532612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-532612 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-532612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:42 UTC │ 09 Oct 25 19:43 UTC │
	│ image   │ newest-cni-532612 image list --format=json                                                                                                                                                                                                    │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ pause   │ -p newest-cni-532612 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ delete  │ -p newest-cni-532612                                                                                                                                                                                                                          │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ delete  │ -p newest-cni-532612                                                                                                                                                                                                                          │ newest-cni-532612            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ start   │ -p auto-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-224541                  │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ image   │ default-k8s-diff-port-661639 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ pause   │ -p default-k8s-diff-port-661639 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-661639 │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:43:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:43:11.031960  495668 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:43:11.032091  495668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:11.032110  495668 out.go:374] Setting ErrFile to fd 2...
	I1009 19:43:11.032116  495668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:11.032388  495668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:43:11.032801  495668 out.go:368] Setting JSON to false
	I1009 19:43:11.033697  495668 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8742,"bootTime":1760030249,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:43:11.033764  495668 start.go:141] virtualization:  
	I1009 19:43:11.039770  495668 out.go:179] * [auto-224541] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:43:11.043026  495668 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:43:11.043138  495668 notify.go:220] Checking for updates...
	I1009 19:43:11.049199  495668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:43:11.052220  495668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:43:11.055296  495668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:43:11.058394  495668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:43:11.067017  495668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:43:11.070678  495668 config.go:182] Loaded profile config "default-k8s-diff-port-661639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:43:11.070827  495668 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:43:11.094686  495668 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:43:11.094854  495668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:43:11.154588  495668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:43:11.144659746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:43:11.154704  495668 docker.go:318] overlay module found
	I1009 19:43:11.157952  495668 out.go:179] * Using the docker driver based on user configuration
	I1009 19:43:11.161678  495668 start.go:305] selected driver: docker
	I1009 19:43:11.161701  495668 start.go:925] validating driver "docker" against <nil>
	I1009 19:43:11.161715  495668 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:43:11.162552  495668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:43:11.217358  495668 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:43:11.208314137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:43:11.217509  495668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:43:11.217738  495668 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:43:11.220726  495668 out.go:179] * Using Docker driver with root privileges
	I1009 19:43:11.223596  495668 cni.go:84] Creating CNI manager for ""
	I1009 19:43:11.223671  495668 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:43:11.223685  495668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:43:11.223772  495668 start.go:349] cluster config:
	{Name:auto-224541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-224541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1009 19:43:11.227148  495668 out.go:179] * Starting "auto-224541" primary control-plane node in "auto-224541" cluster
	I1009 19:43:11.230195  495668 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:43:11.233167  495668 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:43:11.236016  495668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:43:11.236061  495668 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:43:11.236072  495668 cache.go:64] Caching tarball of preloaded images
	I1009 19:43:11.236100  495668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:43:11.236155  495668 preload.go:238] Found /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:43:11.236165  495668 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:43:11.236284  495668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/config.json ...
	I1009 19:43:11.236302  495668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/config.json: {Name:mk0a58dbf54a36ca916ce532ad36dae09e187626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:43:11.255315  495668 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:43:11.255340  495668 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:43:11.255360  495668 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:43:11.255383  495668 start.go:360] acquireMachinesLock for auto-224541: {Name:mkd19eaac244b4ec07885780d95fbae4d5b76489 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:43:11.255508  495668 start.go:364] duration metric: took 105.462µs to acquireMachinesLock for "auto-224541"
	I1009 19:43:11.255541  495668 start.go:93] Provisioning new machine with config: &{Name:auto-224541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-224541 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:43:11.255623  495668 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:43:11.259067  495668 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 19:43:11.259315  495668 start.go:159] libmachine.API.Create for "auto-224541" (driver="docker")
	I1009 19:43:11.259379  495668 client.go:168] LocalClient.Create starting
	I1009 19:43:11.259459  495668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/ca.pem
	I1009 19:43:11.259501  495668 main.go:141] libmachine: Decoding PEM data...
	I1009 19:43:11.259518  495668 main.go:141] libmachine: Parsing certificate...
	I1009 19:43:11.259583  495668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-284447/.minikube/certs/cert.pem
	I1009 19:43:11.259607  495668 main.go:141] libmachine: Decoding PEM data...
	I1009 19:43:11.259619  495668 main.go:141] libmachine: Parsing certificate...
	I1009 19:43:11.260000  495668 cli_runner.go:164] Run: docker network inspect auto-224541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:43:11.276896  495668 cli_runner.go:211] docker network inspect auto-224541 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:43:11.276985  495668 network_create.go:284] running [docker network inspect auto-224541] to gather additional debugging logs...
	I1009 19:43:11.277004  495668 cli_runner.go:164] Run: docker network inspect auto-224541
	W1009 19:43:11.296308  495668 cli_runner.go:211] docker network inspect auto-224541 returned with exit code 1
	I1009 19:43:11.296335  495668 network_create.go:287] error running [docker network inspect auto-224541]: docker network inspect auto-224541: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-224541 not found
	I1009 19:43:11.296361  495668 network_create.go:289] output of [docker network inspect auto-224541]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-224541 not found
	
	** /stderr **
	I1009 19:43:11.296461  495668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:43:11.313176  495668 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
	I1009 19:43:11.313531  495668 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-be6108e3b570 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:36:2c:76:ba:fa} reservation:<nil>}
	I1009 19:43:11.313750  495668 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ad67ab1bb72 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:25:d9:6e:2a:ac} reservation:<nil>}
	I1009 19:43:11.314040  495668 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-86fdc851eb8a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:2e:ad:fa:a7:05} reservation:<nil>}
	I1009 19:43:11.314646  495668 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0d2c0}
	I1009 19:43:11.314673  495668 network_create.go:124] attempt to create docker network auto-224541 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 19:43:11.314735  495668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-224541 auto-224541
	I1009 19:43:11.382997  495668 network_create.go:108] docker network auto-224541 192.168.85.0/24 created
	I1009 19:43:11.383029  495668 kic.go:121] calculated static IP "192.168.85.2" for the "auto-224541" container
	I1009 19:43:11.383125  495668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:43:11.399926  495668 cli_runner.go:164] Run: docker volume create auto-224541 --label name.minikube.sigs.k8s.io=auto-224541 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:43:11.421821  495668 oci.go:103] Successfully created a docker volume auto-224541
	I1009 19:43:11.421907  495668 cli_runner.go:164] Run: docker run --rm --name auto-224541-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-224541 --entrypoint /usr/bin/test -v auto-224541:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:43:11.951843  495668 oci.go:107] Successfully prepared a docker volume auto-224541
	I1009 19:43:11.951933  495668 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:43:11.951949  495668 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:43:11.952024  495668 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-224541:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.382676562Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.387433798Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.387592397Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.387664644Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.395551624Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.395587194Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.395610595Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.401692696Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.401839718Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.401925241Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.407570077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:43:05 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:05.407612367Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.624696932Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2fde7b6a-f61b-4453-bcb6-a91739a0c233 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.626265886Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=29c9f748-5b49-42f7-917f-bfdb0d81e152 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.627409564Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq/dashboard-metrics-scraper" id=966cb053-7123-4d9d-8055-d0d74340ae43 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.627610174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.63983996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.640566829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.666529017Z" level=info msg="Created container f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq/dashboard-metrics-scraper" id=966cb053-7123-4d9d-8055-d0d74340ae43 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.668660892Z" level=info msg="Starting container: f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12" id=37ace9fa-6f41-44de-b4a0-215731e5ba01 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:43:12 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:12.671115773Z" level=info msg="Started container" PID=1726 containerID=f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq/dashboard-metrics-scraper id=37ace9fa-6f41-44de-b4a0-215731e5ba01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=267213d97cdf6f5f7b49d2c2db2bc1b5bd0d671f82f6185f3855e7626f38d0d0
	Oct 09 19:43:12 default-k8s-diff-port-661639 conmon[1724]: conmon f99a0322f89ddcd05f66 <ninfo>: container 1726 exited with status 1
	Oct 09 19:43:13 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:13.249864056Z" level=info msg="Removing container: 66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a" id=80d09538-0b55-4758-af9f-6b28ae139bed name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:43:13 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:13.270535169Z" level=info msg="Error loading conmon cgroup of container 66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a: cgroup deleted" id=80d09538-0b55-4758-af9f-6b28ae139bed name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:43:13 default-k8s-diff-port-661639 crio[650]: time="2025-10-09T19:43:13.275650678Z" level=info msg="Removed container 66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq/dashboard-metrics-scraper" id=80d09538-0b55-4758-af9f-6b28ae139bed name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f99a0322f89dd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   267213d97cdf6       dashboard-metrics-scraper-6ffb444bf9-cvwrq             kubernetes-dashboard
	6cbe6c67db439       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   df7684b3cd285       storage-provisioner                                    kube-system
	f3f7bcb0354a5       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   8ab5b80a13ab3       kubernetes-dashboard-855c9754f9-zdn2m                  kubernetes-dashboard
	35a1d9f668e7d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   48e73eb4a9518       busybox                                                default
	6404f07f9d160       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   df7684b3cd285       storage-provisioner                                    kube-system
	28c9d8313df30       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   2867e5479d1c2       kube-proxy-8nqdl                                       kube-system
	3004c804b31e0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   99cc4cc892e1d       kindnet-29w5k                                          kube-system
	5e41ebd074984       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   da69533526412       coredns-66bc5c9577-xmz2b                               kube-system
	61a4c1355e914       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   fe62ec7037c31       kube-controller-manager-default-k8s-diff-port-661639   kube-system
	768ba8a2857f7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   4a75f48153c80       etcd-default-k8s-diff-port-661639                      kube-system
	fdf381ba0047c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   0cc8c9caedb01       kube-apiserver-default-k8s-diff-port-661639            kube-system
	098c492e4b1b7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   932bb35ae6693       kube-scheduler-default-k8s-diff-port-661639            kube-system
	
	
	==> coredns [5e41ebd074984c566e8b2d974f6ba815a8ade1b0fe66ca6de30500fa900fc1f8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38358 - 31736 "HINFO IN 5600871389416133094.1330390292693617133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034317052s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-661639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-661639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3c7d29676816cc8f16f9f530aa17be871ed6bb50
	                    minikube.k8s.io/name=default-k8s-diff-port-661639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_40_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:40:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-661639
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:43:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:42:54 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:42:54 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:42:54 +0000   Thu, 09 Oct 2025 19:40:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:42:54 +0000   Thu, 09 Oct 2025 19:41:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-661639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 522e28c9acf042da9c57e8cd1d193dc7
	  System UUID:                7c98678a-bd01-4444-9c47-8681509e122a
	  Boot ID:                    91f15668-4eb6-46d6-b814-9362de61c3ca
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-xmz2b                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-661639                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-29w5k                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-661639             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-661639    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-8nqdl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-661639             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cvwrq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zdn2m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m40s)  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-661639 event: Registered Node default-k8s-diff-port-661639 in Controller
	  Normal   NodeReady                100s                   kubelet          Node default-k8s-diff-port-661639 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-661639 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node default-k8s-diff-port-661639 event: Registered Node default-k8s-diff-port-661639 in Controller
	
	
	==> dmesg <==
	[  +9.948658] overlayfs: idmapped layers are currently not supported
	[ +45.467454] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:15] overlayfs: idmapped layers are currently not supported
	[ +24.792467] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:17] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[ +29.016002] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[  +1.517397] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[  +7.271012] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[  +1.599171] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[  +9.958825] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:42] overlayfs: idmapped layers are currently not supported
	[  +3.815530] overlayfs: idmapped layers are currently not supported
	[ +37.476110] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [768ba8a2857f78e85519fbea6febfc8bd4969620ca951c7d260ada4b7c79e0d0] <==
	{"level":"warn","ts":"2025-10-09T19:42:19.784678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:19.855324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:19.903620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:19.956285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.036713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.082751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.227743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.262451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.298946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.353943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.410498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.446243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.511307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.534643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.577658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.650194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.686106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.729904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.784506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.823981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.884509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:20.935299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:21.030329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:21.047400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:42:21.222979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56728","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:20 up  2:25,  0 user,  load average: 3.86, 3.36, 2.61
	Linux default-k8s-diff-port-661639 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3004c804b31e0d8773c58d46c285b0e2e8a522ccfabf81c57f67008d4414bcd6] <==
	I1009 19:42:25.117523       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:42:25.117914       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 19:42:25.118249       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:42:25.118298       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:42:25.118310       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:42:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:42:25.372862       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:42:25.372905       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:42:25.372916       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:42:25.379348       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 19:42:55.373164       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:42:55.373279       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 19:42:55.375592       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 19:42:55.375691       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 19:42:56.673666       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:42:56.673702       1 metrics.go:72] Registering metrics
	I1009 19:42:56.673754       1 controller.go:711] "Syncing nftables rules"
	I1009 19:43:05.377334       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:43:05.377417       1 main.go:301] handling current node
	I1009 19:43:15.372437       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:43:15.372475       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fdf381ba0047cf30aba12aa77f6c2451060e006b9680d6c86f071cb8a93a48aa] <==
	I1009 19:42:23.965929       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 19:42:23.965968       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:42:23.984308       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 19:42:23.984607       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:42:23.984962       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:42:23.985026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:42:24.035465       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:42:24.035891       1 aggregator.go:171] initial CRD sync complete...
	I1009 19:42:24.035943       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 19:42:24.035975       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:42:24.036002       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:42:24.062269       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 19:42:24.081881       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:42:24.178970       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:42:24.214836       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 19:42:24.225670       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1009 19:42:24.230568       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:42:24.748261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:42:25.008329       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:42:25.086875       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:42:25.398703       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.185.147"}
	I1009 19:42:25.523024       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.138.250"}
	I1009 19:42:28.536793       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:42:28.805446       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:42:28.859149       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [61a4c1355e9141371be19936476667f45feaf5cb8cb543e4b20e6dca262e451c] <==
	I1009 19:42:28.473032       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:42:28.473074       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 19:42:28.484345       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:42:28.484567       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 19:42:28.484722       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:42:28.484900       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-661639"
	I1009 19:42:28.485039       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 19:42:28.498333       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:42:28.498299       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:42:28.498503       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:42:28.498546       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:42:28.501398       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:42:28.502027       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:42:28.502417       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 19:42:28.502611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:42:28.502654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:42:28.502678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:42:28.502706       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:42:28.511841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:42:28.520838       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:42:28.546890       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 19:42:28.548192       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:42:28.555597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:42:28.555623       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:42:28.555630       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [28c9d8313df30cc8a4b34901f3b83b589a35e68d03fda99ea2845163cabc4713] <==
	I1009 19:42:25.685049       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:42:25.955181       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:42:26.058757       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:42:26.059726       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 19:42:26.059938       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:42:26.116899       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:42:26.117023       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:42:26.124961       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:42:26.125310       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:42:26.125335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:26.126712       1 config.go:200] "Starting service config controller"
	I1009 19:42:26.126739       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:42:26.126755       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:42:26.126759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:42:26.126773       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:42:26.126777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:42:26.127753       1 config.go:309] "Starting node config controller"
	I1009 19:42:26.127832       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:42:26.127862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:42:26.227619       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:42:26.227633       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:42:26.227663       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [098c492e4b1b7624dacdb34909738100a576f8ba91c34e3d4554ab1dd15c385a] <==
	I1009 19:42:20.743011       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:42:25.544677       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:42:25.544708       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:25.587961       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:42:25.588114       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:25.589078       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:25.588082       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 19:42:25.589221       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 19:42:25.588127       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:42:25.601926       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:42:25.588140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:42:25.689204       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:42:25.689325       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 19:42:25.707205       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:42:29 default-k8s-diff-port-661639 kubelet[778]: W1009 19:42:29.403014     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/09130103b04fd8fd25c9744805cc50907295d088f1044db8ad9257493450f438/crio-8ab5b80a13ab390d5a7058c9ecd12c325390cf21ce548bf208c725b1401a4257 WatchSource:0}: Error finding container 8ab5b80a13ab390d5a7058c9ecd12c325390cf21ce548bf208c725b1401a4257: Status 404 returned error can't find the container with id 8ab5b80a13ab390d5a7058c9ecd12c325390cf21ce548bf208c725b1401a4257
	Oct 09 19:42:29 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:29.806895     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 09 19:42:36 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:36.118706     778 scope.go:117] "RemoveContainer" containerID="b51f6314a1d6036b6895183f47828a41817c274e7e796a5bda9004885c51bf71"
	Oct 09 19:42:37 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:37.123131     778 scope.go:117] "RemoveContainer" containerID="b51f6314a1d6036b6895183f47828a41817c274e7e796a5bda9004885c51bf71"
	Oct 09 19:42:37 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:37.133488     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:37 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:37.133821     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:42:38 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:38.127758     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:38 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:38.127931     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:42:39 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:39.332903     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:39 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:39.333119     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:42:50 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:50.623229     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:51 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:51.172307     778 scope.go:117] "RemoveContainer" containerID="78b3b61377663b6a6981494866359305baaee9a9d182f81e6e6bd7084a8bb353"
	Oct 09 19:42:51 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:51.172679     778 scope.go:117] "RemoveContainer" containerID="66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a"
	Oct 09 19:42:51 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:51.172966     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:42:51 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:51.206349     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zdn2m" podStartSLOduration=12.822849499 podStartE2EDuration="23.206320941s" podCreationTimestamp="2025-10-09 19:42:28 +0000 UTC" firstStartedPulling="2025-10-09 19:42:29.410815864 +0000 UTC m=+14.244198341" lastFinishedPulling="2025-10-09 19:42:39.794287305 +0000 UTC m=+24.627669783" observedRunningTime="2025-10-09 19:42:40.152432116 +0000 UTC m=+24.985814594" watchObservedRunningTime="2025-10-09 19:42:51.206320941 +0000 UTC m=+36.039703427"
	Oct 09 19:42:56 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:56.188383     778 scope.go:117] "RemoveContainer" containerID="6404f07f9d160418524590fb317abefb678e64e1458895af50b2050a46f7fcf7"
	Oct 09 19:42:59 default-k8s-diff-port-661639 kubelet[778]: I1009 19:42:59.333426     778 scope.go:117] "RemoveContainer" containerID="66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a"
	Oct 09 19:42:59 default-k8s-diff-port-661639 kubelet[778]: E1009 19:42:59.333608     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:43:12 default-k8s-diff-port-661639 kubelet[778]: I1009 19:43:12.623646     778 scope.go:117] "RemoveContainer" containerID="66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a"
	Oct 09 19:43:13 default-k8s-diff-port-661639 kubelet[778]: I1009 19:43:13.236553     778 scope.go:117] "RemoveContainer" containerID="66a2a89fefc3ede9ec49b6658fd03b65dc7f050b338f0eb70bc675d14c6c482a"
	Oct 09 19:43:13 default-k8s-diff-port-661639 kubelet[778]: I1009 19:43:13.236840     778 scope.go:117] "RemoveContainer" containerID="f99a0322f89ddcd05f662407287e026f441072c26a94843024b6d45f7c1e3b12"
	Oct 09 19:43:13 default-k8s-diff-port-661639 kubelet[778]: E1009 19:43:13.236997     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cvwrq_kubernetes-dashboard(55e38f72-bd20-4b73-bc1b-11a540167986)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cvwrq" podUID="55e38f72-bd20-4b73-bc1b-11a540167986"
	Oct 09 19:43:14 default-k8s-diff-port-661639 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 19:43:14 default-k8s-diff-port-661639 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 19:43:14 default-k8s-diff-port-661639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f3f7bcb0354a582c2e24deff0f77a6efa7f0bb494ebd1df8e4c28627cd72ba19] <==
	2025/10/09 19:42:39 Using namespace: kubernetes-dashboard
	2025/10/09 19:42:39 Using in-cluster config to connect to apiserver
	2025/10/09 19:42:39 Using secret token for csrf signing
	2025/10/09 19:42:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 19:42:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 19:42:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 19:42:39 Generating JWE encryption key
	2025/10/09 19:42:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 19:42:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 19:42:40 Initializing JWE encryption key from synchronized object
	2025/10/09 19:42:40 Creating in-cluster Sidecar client
	2025/10/09 19:42:40 Serving insecurely on HTTP port: 9090
	2025/10/09 19:42:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:43:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 19:42:39 Starting overwatch
	
	
	==> storage-provisioner [6404f07f9d160418524590fb317abefb678e64e1458895af50b2050a46f7fcf7] <==
	I1009 19:42:25.219897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:42:55.224482       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6cbe6c67db4398fd97cd8a706d13b1c09be849299e98a9105cec8cf358e5cad5] <==
	I1009 19:42:56.283667       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:42:56.303712       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:42:56.303840       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 19:42:56.306094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:42:59.762020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:04.022528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:07.628566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:10.682207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:13.706610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:13.722415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:43:13.722596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:43:13.722797       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661639_523d3490-97e2-4084-8e87-7cd89664ca5b!
	I1009 19:43:13.724005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36623463-521a-4e44-abb0-3a458f21ddd5", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-661639_523d3490-97e2-4084-8e87-7cd89664ca5b became leader
	W1009 19:43:13.746174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:13.758846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:43:13.824059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-661639_523d3490-97e2-4084-8e87-7cd89664ca5b!
	W1009 19:43:15.762728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:15.778380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:17.792646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:17.810034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:19.818926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:43:19.836948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639: exit status 2 (412.178194ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-661639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.07s)
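The post-mortem above ends with two checks: out/minikube-linux-arm64 status --format={{.APIServer}} (which printed "Running" but exited with status 2, noted by the helper as "may be ok") and a kubectl query for any pods whose phase is not Running. As a rough sketch only — the function name and standalone program below are hypothetical and are not code from helpers_test.go — the same pod-phase query can be driven from Go via os/exec:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listNotRunningPods mirrors the post-mortem query shown above: it asks kubectl
	// for the names of all pods, in every namespace, whose status.phase is not
	// Running. Illustrative sketch only; not taken from helpers_test.go.
	func listNotRunningPods(kubeContext string) (string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Context name taken from the log above.
		pods, err := listNotRunningPods("default-k8s-diff-port-661639")
		fmt.Println(pods, err)
	}

The --field-selector filter on status.phase is evaluated server-side, so the query returns an empty list when every pod is Running, which is what a clean post-mortem expects.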
E1009 19:49:27.198442  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:38.235620  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:38.241973  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:38.253311  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:38.274647  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:38.315993  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:38.397517  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:38.558968  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:38.880616  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:39.522586  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:40.804373  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:43.366460  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:48.488082  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:51.213607  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:51.220116  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:51.232292  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:51.253661  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:51.295719  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:51.377144  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:51.538471  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:51.859758  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:52.501958  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:53.784207  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:56.345531  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:58.730218  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:01.467379  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:11.709283  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:19.212552  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/auto-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (258/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 40.18
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 37.65
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 176.58
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 8.84
48 TestAddons/StoppedEnableDisable 12.21
49 TestCertOptions 35.62
50 TestCertExpiration 237.23
59 TestErrorSpam/setup 32.17
60 TestErrorSpam/start 0.8
61 TestErrorSpam/status 1.12
62 TestErrorSpam/pause 5.6
63 TestErrorSpam/unpause 5.76
64 TestErrorSpam/stop 1.42
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 49.53
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 25.2
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
76 TestFunctional/serial/CacheCmd/cache/add_local 1.19
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
84 TestFunctional/serial/ExtraConfig 35.3
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.59
87 TestFunctional/serial/LogsFileCmd 1.55
88 TestFunctional/serial/InvalidService 4.19
90 TestFunctional/parallel/ConfigCmd 0.49
91 TestFunctional/parallel/DashboardCmd 7.67
92 TestFunctional/parallel/DryRun 0.45
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.06
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 26.34
102 TestFunctional/parallel/SSHCmd 0.55
103 TestFunctional/parallel/CpCmd 2
105 TestFunctional/parallel/FileSync 0.32
106 TestFunctional/parallel/CertSync 2.24
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
114 TestFunctional/parallel/License 0.37
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 1.2
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.82
122 TestFunctional/parallel/ImageCommands/Setup 0.65
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.35
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ServiceCmd/List 0.51
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
151 TestFunctional/parallel/ProfileCmd/profile_list 0.42
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
153 TestFunctional/parallel/MountCmd/any-port 8.19
154 TestFunctional/parallel/MountCmd/specific-port 2
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 211.88
164 TestMultiControlPlane/serial/DeployApp 6.51
165 TestMultiControlPlane/serial/PingHostFromPods 1.51
166 TestMultiControlPlane/serial/AddWorkerNode 60.77
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
169 TestMultiControlPlane/serial/CopyFile 20.19
170 TestMultiControlPlane/serial/StopSecondaryNode 12.75
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 29.2
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.3
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 127.57
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.09
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
177 TestMultiControlPlane/serial/StopCluster 35.73
178 TestMultiControlPlane/serial/RestartCluster 160.81
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
180 TestMultiControlPlane/serial/AddSecondaryNode 81.7
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.14
185 TestJSONOutput/start/Command 81.49
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.77
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 70.58
211 TestKicCustomNetwork/use_default_bridge_network 36.64
212 TestKicExistingNetwork 36.74
213 TestKicCustomSubnet 38.46
214 TestKicStaticIP 36.74
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 73.75
219 TestMountStart/serial/StartWithMountFirst 8.72
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 9.39
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.23
226 TestMountStart/serial/RestartStopped 8.12
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 137.11
231 TestMultiNode/serial/DeployApp2Nodes 4.97
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 57.58
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.77
236 TestMultiNode/serial/CopyFile 10.71
237 TestMultiNode/serial/StopNode 2.36
238 TestMultiNode/serial/StartAfterStop 8.62
239 TestMultiNode/serial/RestartKeepsNodes 75.83
240 TestMultiNode/serial/DeleteNode 5.61
241 TestMultiNode/serial/StopMultiNode 23.77
242 TestMultiNode/serial/RestartMultiNode 51.18
243 TestMultiNode/serial/ValidateNameConflict 37.36
248 TestPreload 153.14
250 TestScheduledStopUnix 107.88
253 TestInsufficientStorage 13.94
254 TestRunningBinaryUpgrade 65.27
256 TestKubernetesUpgrade 163.83
257 TestMissingContainerUpgrade 129.92
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 40.87
261 TestNoKubernetes/serial/StartWithStopK8s 46.96
262 TestNoKubernetes/serial/Start 5.8
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
265 TestNoKubernetes/serial/ProfileList 0.68
266 TestNoKubernetes/serial/Stop 1.22
267 TestNoKubernetes/serial/StartNoArgs 8.49
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
269 TestStoppedBinaryUpgrade/Setup 8.85
270 TestStoppedBinaryUpgrade/Upgrade 64.08
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.33
280 TestPause/serial/Start 84.89
281 TestPause/serial/SecondStartNoReconfiguration 24.6
290 TestNetworkPlugins/group/false 3.65
295 TestStartStop/group/old-k8s-version/serial/FirstStart 63.65
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.48
298 TestStartStop/group/old-k8s-version/serial/Stop 11.95
300 TestStartStop/group/no-preload/serial/FirstStart 76.97
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
302 TestStartStop/group/old-k8s-version/serial/SecondStart 58.95
303 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
304 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
305 TestStartStop/group/no-preload/serial/DeployApp 11.4
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
310 TestStartStop/group/embed-certs/serial/FirstStart 84.64
311 TestStartStop/group/no-preload/serial/Stop 12.08
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
313 TestStartStop/group/no-preload/serial/SecondStart 62.51
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
316 TestStartStop/group/embed-certs/serial/DeployApp 8.41
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/embed-certs/serial/Stop 11.93
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.33
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/embed-certs/serial/SecondStart 63.82
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.39
331 TestStartStop/group/newest-cni/serial/FirstStart 42.35
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.88
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
335 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.25
336 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/Stop 12.26
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
340 TestStartStop/group/newest-cni/serial/SecondStart 16.24
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
347 TestNetworkPlugins/group/auto/Start 86.66
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
350 TestNetworkPlugins/group/kindnet/Start 86.41
351 TestNetworkPlugins/group/auto/KubeletFlags 0.34
352 TestNetworkPlugins/group/auto/NetCatPod 10.27
353 TestNetworkPlugins/group/auto/DNS 0.16
354 TestNetworkPlugins/group/auto/Localhost 0.15
355 TestNetworkPlugins/group/auto/HairPin 0.15
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.4
359 TestNetworkPlugins/group/kindnet/DNS 0.21
360 TestNetworkPlugins/group/kindnet/Localhost 0.23
361 TestNetworkPlugins/group/kindnet/HairPin 0.16
362 TestNetworkPlugins/group/calico/Start 76.69
363 TestNetworkPlugins/group/custom-flannel/Start 64.56
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.29
366 TestNetworkPlugins/group/calico/NetCatPod 11.25
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
369 TestNetworkPlugins/group/calico/DNS 0.18
370 TestNetworkPlugins/group/calico/Localhost 0.15
371 TestNetworkPlugins/group/calico/HairPin 0.16
372 TestNetworkPlugins/group/custom-flannel/DNS 0.17
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
375 TestNetworkPlugins/group/enable-default-cni/Start 84.32
376 TestNetworkPlugins/group/flannel/Start 70.05
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
379 TestNetworkPlugins/group/flannel/NetCatPod 11.27
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
382 TestNetworkPlugins/group/flannel/DNS 0.17
383 TestNetworkPlugins/group/flannel/Localhost 0.13
384 TestNetworkPlugins/group/flannel/HairPin 0.18
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
388 TestNetworkPlugins/group/bridge/Start 73.62
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
390 TestNetworkPlugins/group/bridge/NetCatPod 10.25
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (40.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-800425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-800425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (40.17751729s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (40.18s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1009 18:27:36.404770  286309 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1009 18:27:36.404854  286309 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-800425
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-800425: exit status 85 (91.298504ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-800425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-800425 │ jenkins │ v1.37.0 │ 09 Oct 25 18:26 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:26:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:26:56.275639  286314 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:26:56.275785  286314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:26:56.275797  286314 out.go:374] Setting ErrFile to fd 2...
	I1009 18:26:56.275803  286314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:26:56.276065  286314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	W1009 18:26:56.276214  286314 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21139-284447/.minikube/config/config.json: open /home/jenkins/minikube-integration/21139-284447/.minikube/config/config.json: no such file or directory
	I1009 18:26:56.276619  286314 out.go:368] Setting JSON to true
	I1009 18:26:56.277456  286314 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4168,"bootTime":1760030249,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 18:26:56.277525  286314 start.go:141] virtualization:  
	I1009 18:26:56.281516  286314 out.go:99] [download-only-800425] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1009 18:26:56.281735  286314 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 18:26:56.281854  286314 notify.go:220] Checking for updates...
	I1009 18:26:56.284864  286314 out.go:171] MINIKUBE_LOCATION=21139
	I1009 18:26:56.287938  286314 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:26:56.290993  286314 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:26:56.293902  286314 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 18:26:56.296839  286314 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 18:26:56.302703  286314 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:26:56.302984  286314 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:26:56.328307  286314 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 18:26:56.328419  286314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:26:56.388535  286314 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-09 18:26:56.379661042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:26:56.388648  286314 docker.go:318] overlay module found
	I1009 18:26:56.391567  286314 out.go:99] Using the docker driver based on user configuration
	I1009 18:26:56.391613  286314 start.go:305] selected driver: docker
	I1009 18:26:56.391628  286314 start.go:925] validating driver "docker" against <nil>
	I1009 18:26:56.391726  286314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:26:56.450973  286314 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-09 18:26:56.440921774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:26:56.451124  286314 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:26:56.451392  286314 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1009 18:26:56.451543  286314 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:26:56.454675  286314 out.go:171] Using Docker driver with root privileges
	I1009 18:26:56.457639  286314 cni.go:84] Creating CNI manager for ""
	I1009 18:26:56.457721  286314 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:26:56.457735  286314 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:26:56.457815  286314 start.go:349] cluster config:
	{Name:download-only-800425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-800425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:26:56.460723  286314 out.go:99] Starting "download-only-800425" primary control-plane node in "download-only-800425" cluster
	I1009 18:26:56.460746  286314 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:26:56.463529  286314 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:26:56.463572  286314 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:26:56.463728  286314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:26:56.479790  286314 cache.go:162] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:26:56.480009  286314 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 18:26:56.480122  286314 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:26:56.521431  286314 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1009 18:26:56.521460  286314 cache.go:64] Caching tarball of preloaded images
	I1009 18:26:56.521629  286314 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:26:56.524976  286314 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1009 18:26:56.525010  286314 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1009 18:26:56.607505  286314 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1009 18:26:56.607638  286314 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1009 18:27:02.020408  286314 cache.go:165] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 18:27:35.717017  286314 cache.go:67] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1009 18:27:35.717449  286314 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/download-only-800425/config.json ...
	I1009 18:27:35.717490  286314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/download-only-800425/config.json: {Name:mk054a6d793e46330afb463809c6df0df0590ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:27:35.718327  286314 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:27:35.718523  286314 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-800425 host does not exist
	  To start a cluster, run: "minikube start -p download-only-800425"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
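The Last Start log above shows the v1.28.0 preload tarball being downloaded with an md5 digest obtained from the GCS API (checksum=md5:e092595ade89dbfc477bd4cd6b9c633b) and cached locally before use. A minimal sketch of that kind of checksum verification, assuming the cached path and digest copied from the log and using only the Go standard library (illustrative only, not minikube's download.go):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the downloaded file and compares the hex digest to the
	// expected value. Sketch only; minikube's actual download path may differ.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Path and digest are taken from the download log above.
		err := verifyMD5(
			"/home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
			"e092595ade89dbfc477bd4cd6b9c633b",
		)
		fmt.Println(err)
	}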

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-800425
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (37.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-958806 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-958806 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (37.645859568s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (37.65s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1009 18:28:14.483148  286309 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 18:28:14.483184  286309 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-958806
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-958806: exit status 85 (97.121525ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-800425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-800425 │ jenkins │ v1.37.0 │ 09 Oct 25 18:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ delete  │ -p download-only-800425                                                                                                                                                   │ download-only-800425 │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │ 09 Oct 25 18:27 UTC │
	│ start   │ -o=json --download-only -p download-only-958806 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-958806 │ jenkins │ v1.37.0 │ 09 Oct 25 18:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:27:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:27:36.886871  286517 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:27:36.887073  286517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:27:36.887103  286517 out.go:374] Setting ErrFile to fd 2...
	I1009 18:27:36.887124  286517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:27:36.887408  286517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:27:36.887848  286517 out.go:368] Setting JSON to true
	I1009 18:27:36.888703  286517 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4208,"bootTime":1760030249,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 18:27:36.888795  286517 start.go:141] virtualization:  
	I1009 18:27:36.892087  286517 out.go:99] [download-only-958806] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 18:27:36.892307  286517 notify.go:220] Checking for updates...
	I1009 18:27:36.895265  286517 out.go:171] MINIKUBE_LOCATION=21139
	I1009 18:27:36.898289  286517 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:27:36.901172  286517 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:27:36.904066  286517 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 18:27:36.906807  286517 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 18:27:36.912385  286517 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:27:36.912643  286517 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:27:36.946412  286517 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 18:27:36.946531  286517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:27:37.004723  286517 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 18:27:36.995274234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:27:37.004833  286517 docker.go:318] overlay module found
	I1009 18:27:37.007825  286517 out.go:99] Using the docker driver based on user configuration
	I1009 18:27:37.007869  286517 start.go:305] selected driver: docker
	I1009 18:27:37.007885  286517 start.go:925] validating driver "docker" against <nil>
	I1009 18:27:37.007990  286517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:27:37.066236  286517 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 18:27:37.057428911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:27:37.066405  286517 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:27:37.066692  286517 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1009 18:27:37.066849  286517 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:27:37.069798  286517 out.go:171] Using Docker driver with root privileges
	I1009 18:27:37.072636  286517 cni.go:84] Creating CNI manager for ""
	I1009 18:27:37.072718  286517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:27:37.072735  286517 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:27:37.072809  286517 start.go:349] cluster config:
	{Name:download-only-958806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-958806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:27:37.075912  286517 out.go:99] Starting "download-only-958806" primary control-plane node in "download-only-958806" cluster
	I1009 18:27:37.075950  286517 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:27:37.078891  286517 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:27:37.078939  286517 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:27:37.079140  286517 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:27:37.095345  286517 cache.go:162] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:27:37.095473  286517 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 18:27:37.095504  286517 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 18:27:37.095509  286517 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 18:27:37.095516  286517 cache.go:165] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 18:27:37.129665  286517 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 18:27:37.129696  286517 cache.go:64] Caching tarball of preloaded images
	I1009 18:27:37.129877  286517 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:27:37.132978  286517 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1009 18:27:37.133033  286517 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1009 18:27:37.221585  286517 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1009 18:27:37.221637  286517 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21139-284447/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-958806 host does not exist
	  To start a cluster, run: "minikube start -p download-only-958806"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-958806
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1009 18:28:15.652042  286309 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-572714 --alsologtostderr --binary-mirror http://127.0.0.1:37233 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-572714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-572714
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-419518
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-419518: exit status 85 (66.80901ms)

                                                
                                                
-- stdout --
	* Profile "addons-419518" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-419518"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-419518
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-419518: exit status 85 (81.182541ms)

                                                
                                                
-- stdout --
	* Profile "addons-419518" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-419518"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (176.58s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-419518 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-419518 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m56.575814199s)
--- PASS: TestAddons/Setup (176.58s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-419518 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-419518 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.84s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-419518 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-419518 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f5208359-5ee5-4bec-9305-f89953e59ed6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f5208359-5ee5-4bec-9305-f89953e59ed6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003969461s
addons_test.go:694: (dbg) Run:  kubectl --context addons-419518 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-419518 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-419518 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-419518 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.84s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-419518
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-419518: (11.908361027s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-419518
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-419518
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-419518
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

                                                
                                    
TestCertOptions (35.62s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-983220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.798275427s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-983220 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-983220 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-983220 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-983220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-983220
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-983220: (2.068930001s)
--- PASS: TestCertOptions (35.62s)

                                                
                                    
TestCertExpiration (237.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-259172 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-259172 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (36.462877419s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-259172 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.360680678s)
helpers_test.go:175: Cleaning up "cert-expiration-259172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-259172
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-259172: (2.401842818s)
--- PASS: TestCertExpiration (237.23s)

                                                
                                    
TestErrorSpam/setup (32.17s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-016523 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-016523 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-016523 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-016523 --driver=docker  --container-runtime=crio: (32.174577133s)
--- PASS: TestErrorSpam/setup (32.17s)

                                                
                                    
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.12s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
TestErrorSpam/pause (5.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause: exit status 80 (1.950652272s)

                                                
                                                
-- stdout --
	* Pausing node nospam-016523 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:35:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause: exit status 80 (1.623977961s)

                                                
                                                
-- stdout --
	* Pausing node nospam-016523 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause: exit status 80 (2.018742585s)

                                                
                                                
-- stdout --
	* Pausing node nospam-016523 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:35:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.60s)

                                                
                                    
TestErrorSpam/unpause (5.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause: exit status 80 (2.182501799s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-016523 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:35:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause: exit status 80 (1.756638613s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-016523 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:35:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause: exit status 80 (1.815699166s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-016523 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T18:35:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.76s)

                                                
                                    
TestErrorSpam/stop (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 stop: (1.22102977s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-016523 --log_dir /tmp/nospam-016523 stop
--- PASS: TestErrorSpam/stop (1.42s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21139-284447/.minikube/files/etc/test/nested/copy/286309/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.53s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-141121 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1009 18:36:14.055427  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:14.062254  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:14.073596  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:14.094858  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:14.136176  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:14.217601  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:14.379051  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:14.700719  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:15.342329  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:16.624682  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-141121 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (49.531928571s)
--- PASS: TestFunctional/serial/StartWithProxy (49.53s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (25.2s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1009 18:36:16.714470  286309 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-141121 --alsologtostderr -v=8
E1009 18:36:19.186893  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:24.309311  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:36:34.551272  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-141121 --alsologtostderr -v=8: (25.191318226s)
functional_test.go:678: soft start took 25.197671959s for "functional-141121" cluster.
I1009 18:36:41.906168  286309 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (25.20s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-141121 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-141121 cache add registry.k8s.io/pause:3.1: (1.160531427s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-141121 cache add registry.k8s.io/pause:3.3: (1.142322568s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-141121 cache add registry.k8s.io/pause:latest: (1.24127276s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-141121 /tmp/TestFunctionalserialCacheCmdcacheadd_local1397939412/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cache add minikube-local-cache-test:functional-141121
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cache delete minikube-local-cache-test:functional-141121
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-141121
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.468596ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 kubectl -- --context functional-141121 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-141121 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.3s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-141121 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1009 18:36:55.032753  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-141121 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.299366018s)
functional_test.go:776: restart took 35.299458605s for "functional-141121" cluster.
I1009 18:37:24.756043  286309 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (35.30s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-141121 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-141121 logs: (1.593098631s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 logs --file /tmp/TestFunctionalserialLogsFileCmd1759450031/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-141121 logs --file /tmp/TestFunctionalserialLogsFileCmd1759450031/001/logs.txt: (1.551737073s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.19s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-141121 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-141121
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-141121: exit status 115 (390.025864ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30391 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-141121 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.19s)
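For readers unfamiliar with this test: it applies a Service whose selector matches no running pod and expects `minikube service` to fail with SVC_UNREACHABLE, as seen above. The sketch below is a hypothetical stand-in for testdata/invalidsvc.yaml (the real manifest may differ); the service name and port are taken from the output above, while the selector label is assumed.

# Hypothetical reproduction; the selector deliberately matches no pod.
cat <<'EOF' | kubectl --context functional-141121 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist    # assumed label; nothing in the cluster carries it
  ports:
  - port: 80
    targetPort: 80
EOF
out/minikube-linux-arm64 service invalid-svc -p functional-141121   # expected: exit status 115 (SVC_UNREACHABLE)
kubectl --context functional-141121 delete service invalid-svc      # clean up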

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 config get cpus: exit status 14 (73.57671ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 config get cpus: exit status 14 (73.234586ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
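The round-trip above, condensed into a standalone sketch (same binary and profile as in the log; `config get` on an unset key exits with status 14):

BIN=out/minikube-linux-arm64
$BIN -p functional-141121 config get cpus     # unset: exit 14, "specified key could not be found in config"
$BIN -p functional-141121 config set cpus 2
$BIN -p functional-141121 config get cpus     # prints 2
$BIN -p functional-141121 config unset cpus
$BIN -p functional-141121 config get cpus     # exit 14 again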

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (7.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-141121 --alsologtostderr -v=1]
2025/10/09 18:48:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-141121 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 313880: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.67s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-141121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-141121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (195.621871ms)

                                                
                                                
-- stdout --
	* [functional-141121] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:47:57.423464  313581 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:47:57.423665  313581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:57.423678  313581 out.go:374] Setting ErrFile to fd 2...
	I1009 18:47:57.423684  313581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:57.424295  313581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:47:57.425021  313581 out.go:368] Setting JSON to false
	I1009 18:47:57.425895  313581 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5429,"bootTime":1760030249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 18:47:57.426005  313581 start.go:141] virtualization:  
	I1009 18:47:57.429603  313581 out.go:179] * [functional-141121] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 18:47:57.433425  313581 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:47:57.433494  313581 notify.go:220] Checking for updates...
	I1009 18:47:57.436674  313581 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:47:57.439631  313581 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:47:57.442413  313581 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 18:47:57.445225  313581 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:47:57.448191  313581 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:47:57.452863  313581 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:47:57.453808  313581 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:47:57.491202  313581 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 18:47:57.491333  313581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:47:57.551079  313581 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 18:47:57.541254789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:47:57.551187  313581 docker.go:318] overlay module found
	I1009 18:47:57.554320  313581 out.go:179] * Using the docker driver based on existing profile
	I1009 18:47:57.557169  313581 start.go:305] selected driver: docker
	I1009 18:47:57.557191  313581 start.go:925] validating driver "docker" against &{Name:functional-141121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-141121 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:57.557297  313581 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:47:57.560784  313581 out.go:203] 
	W1009 18:47:57.563698  313581 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:47:57.566558  313581 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-141121 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
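A minimal way to reproduce the behaviour checked here (sketch; same binary, profile, and driver flags as above): a dry run requesting 250MB exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while a dry run without the memory override succeeds.

out/minikube-linux-arm64 start -p functional-141121 --dry-run --memory 250MB --driver=docker --container-runtime=crio
echo $?   # expected: 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
out/minikube-linux-arm64 start -p functional-141121 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio
echo $?   # expected: 0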

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-141121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-141121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.442312ms)

                                                
                                                
-- stdout --
	* [functional-141121] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:47:57.881283  313702 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:47:57.881480  313702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:57.881494  313702 out.go:374] Setting ErrFile to fd 2...
	I1009 18:47:57.881499  313702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:57.883084  313702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:47:57.883588  313702 out.go:368] Setting JSON to false
	I1009 18:47:57.884723  313702 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5429,"bootTime":1760030249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 18:47:57.884796  313702 start.go:141] virtualization:  
	I1009 18:47:57.888094  313702 out.go:179] * [functional-141121] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1009 18:47:57.891844  313702 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:47:57.891921  313702 notify.go:220] Checking for updates...
	I1009 18:47:57.894797  313702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:47:57.897961  313702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 18:47:57.900737  313702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 18:47:57.903606  313702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:47:57.906462  313702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:47:57.909676  313702 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:47:57.910281  313702 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:47:57.940213  313702 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 18:47:57.940340  313702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:47:58.015181  313702 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 18:47:58.002903402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:47:58.015292  313702 docker.go:318] overlay module found
	I1009 18:47:58.018442  313702 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1009 18:47:58.021331  313702 start.go:305] selected driver: docker
	I1009 18:47:58.021354  313702 start.go:925] validating driver "docker" against &{Name:functional-141121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-141121 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:58.021473  313702 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:47:58.025077  313702 out.go:203] 
	W1009 18:47:58.028005  313702 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 18:47:58.031097  313702 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
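The second invocation above formats the status struct with a Go template; a sketch of the three forms exercised, assuming the field names shown in the log (.Host, .Kubelet, .APIServer, .Kubeconfig):

out/minikube-linux-arm64 -p functional-141121 status           # default human-readable output
out/minikube-linux-arm64 -p functional-141121 status -o json   # machine-readable
out/minikube-linux-arm64 -p functional-141121 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'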

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5692b9d6-8818-4d68-874a-fd1600731444] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003564823s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-141121 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-141121 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-141121 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-141121 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6d284674-c49b-4bc1-9ad7-997a4885a6ee] Pending
helpers_test.go:352: "sp-pod" [6d284674-c49b-4bc1-9ad7-997a4885a6ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6d284674-c49b-4bc1-9ad7-997a4885a6ee] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003059002s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-141121 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-141121 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-141121 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2589ecc1-8604-46f5-ae17-047255fa5077] Pending
helpers_test.go:352: "sp-pod" [2589ecc1-8604-46f5-ae17-047255fa5077] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002865681s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-141121 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.34s)
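The persistence check above reduces to the following flow (sketch; it assumes, as the log suggests, that the testdata manifests create a PVC named myclaim and a pod sp-pod mounting it at /tmp/mount, and that the commands run from the integration-test working directory):

kubectl --context functional-141121 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-141121 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-141121 wait --for=condition=Ready pod/sp-pod --timeout=4m0s
kubectl --context functional-141121 exec sp-pod -- touch /tmp/mount/foo               # write through the claim
kubectl --context functional-141121 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-141121 apply -f testdata/storage-provisioner/pod.yaml    # new pod, same claim
kubectl --context functional-141121 wait --for=condition=Ready pod/sp-pod --timeout=4m0s
kubectl --context functional-141121 exec sp-pod -- ls /tmp/mount                      # foo should still be present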

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh -n functional-141121 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cp functional-141121:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2548743382/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh -n functional-141121 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh -n functional-141121 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.00s)
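The three copy directions exercised above, as a standalone sketch (host-side destination paths are arbitrary examples):

BIN=out/minikube-linux-arm64
$BIN -p functional-141121 cp testdata/cp-test.txt /home/docker/cp-test.txt                  # host -> node
$BIN -p functional-141121 cp functional-141121:/home/docker/cp-test.txt /tmp/cp-test.txt    # node -> host
$BIN -p functional-141121 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt           # parent dirs created on the node
$BIN -p functional-141121 ssh -n functional-141121 "sudo cat /home/docker/cp-test.txt"      # verify contents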

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/286309/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo cat /etc/test/nested/copy/286309/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/286309.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo cat /etc/ssl/certs/286309.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/286309.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo cat /usr/share/ca-certificates/286309.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2863092.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo cat /etc/ssl/certs/2863092.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2863092.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo cat /usr/share/ca-certificates/2863092.pem"
E1009 18:37:35.995043  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)
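Condensed, the test checks that a host-side test certificate (286309.pem, named after the test process PID) is synced into both CA locations in the guest plus an OpenSSL subject-hash entry. A sketch of the same check, assuming each of the three paths holds a PEM certificate:

for f in /etc/ssl/certs/286309.pem /usr/share/ca-certificates/286309.pem /etc/ssl/certs/51391683.0; do
  out/minikube-linux-arm64 -p functional-141121 ssh "sudo cat $f" | openssl x509 -noout -subject
done
# the printed subjects should agree if the certificate was synced correctly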

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-141121 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 ssh "sudo systemctl is-active docker": exit status 1 (374.649692ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 ssh "sudo systemctl is-active containerd": exit status 1 (342.814728ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
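Equivalent manual check (sketch): with crio as the configured runtime, docker and containerd report inactive (systemctl is-active exits non-zero for inactive units, hence the exit status 1 above), while the CRI-O unit, assumed here to be named crio, reports active.

out/minikube-linux-arm64 -p functional-141121 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
out/minikube-linux-arm64 -p functional-141121 ssh "sudo systemctl is-active containerd"   # inactive, non-zero exit
out/minikube-linux-arm64 -p functional-141121 ssh "sudo systemctl is-active crio"         # active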

                                                
                                    
x
+
TestFunctional/parallel/License (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-141121 version -o=json --components: (1.194895241s)
--- PASS: TestFunctional/parallel/Version/components (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-141121 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-141121 image ls --format short --alsologtostderr:
I1009 18:48:07.477680  314247 out.go:360] Setting OutFile to fd 1 ...
I1009 18:48:07.478078  314247 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:07.478108  314247 out.go:374] Setting ErrFile to fd 2...
I1009 18:48:07.478160  314247 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:07.478526  314247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
I1009 18:48:07.479177  314247 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:07.479379  314247 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:07.479904  314247 cli_runner.go:164] Run: docker container inspect functional-141121 --format={{.State.Status}}
I1009 18:48:07.498425  314247 ssh_runner.go:195] Run: systemctl --version
I1009 18:48:07.498477  314247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141121
I1009 18:48:07.519779  314247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/functional-141121/id_rsa Username:docker}
I1009 18:48:07.622464  314247 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
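This and the following ImageList tests exercise the same listing command with four output formats; a sketch of the loop they cover:

BIN=out/minikube-linux-arm64
for fmt in short table json yaml; do
  $BIN -p functional-141121 image ls --format "$fmt"
done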

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-141121 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-141121  │ 7f77262d24cbb │ 1.64MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-141121 image ls --format table --alsologtostderr:
I1009 18:48:11.998360  314726 out.go:360] Setting OutFile to fd 1 ...
I1009 18:48:11.998659  314726 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:11.998690  314726 out.go:374] Setting ErrFile to fd 2...
I1009 18:48:11.998766  314726 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:11.999056  314726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
I1009 18:48:11.999754  314726 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:11.999938  314726 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:12.000477  314726 cli_runner.go:164] Run: docker container inspect functional-141121 --format={{.State.Status}}
I1009 18:48:12.023708  314726 ssh_runner.go:195] Run: systemctl --version
I1009 18:48:12.023775  314726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141121
I1009 18:48:12.042727  314726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/functional-141121/id_rsa Username:docker}
I1009 18:48:12.153042  314726 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-141121 image ls --format json --alsologtostderr:
[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1ddda
d8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"7f77262d24cbb44bc9d436c60e7a4dd1c3cdc6f2f2a01feaef4a8bafff3bfac0","repoDigests":["localhost/my-image@sha256:dbb753e37b4293d9a63d0f5f49ea6c871cea87784b2a7c0b5decc769a41520cb"],"repoTags":["localhost/my-image:functional-141121"],"size":"1640788"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-sch
eduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2c0728424d75b9558e7268b837d37b0f644a0b6ab84f29577609a2d35c0e2926","repoDigests":["docker.io/library/848456b75d55042718a44a6cfd1b49818292d23fcfd4cfa35907b144db81fac5-tmp@sha256:d12a590dd519499d0075c47a59048e74b06051845924f47201b83fa708765a46"],"repoTags":[],"size":"1638179"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sh
a256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","re
poDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.
34.1"],"size":"75938711"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe
0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-141121 image ls --format json --alsologtostderr:
I1009 18:48:11.760707  314687 out.go:360] Setting OutFile to fd 1 ...
I1009 18:48:11.760887  314687 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:11.760900  314687 out.go:374] Setting ErrFile to fd 2...
I1009 18:48:11.760906  314687 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:11.761180  314687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
I1009 18:48:11.764765  314687 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:11.764975  314687 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:11.765596  314687 cli_runner.go:164] Run: docker container inspect functional-141121 --format={{.State.Status}}
I1009 18:48:11.784306  314687 ssh_runner.go:195] Run: systemctl --version
I1009 18:48:11.784371  314687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141121
I1009 18:48:11.809780  314687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/functional-141121/id_rsa Username:docker}
I1009 18:48:11.912924  314687 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-141121 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-141121 image ls --format yaml --alsologtostderr:
I1009 18:48:07.715583  314284 out.go:360] Setting OutFile to fd 1 ...
I1009 18:48:07.716110  314284 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:07.716128  314284 out.go:374] Setting ErrFile to fd 2...
I1009 18:48:07.716133  314284 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:07.716657  314284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
I1009 18:48:07.717276  314284 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:07.717401  314284 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:07.717845  314284 cli_runner.go:164] Run: docker container inspect functional-141121 --format={{.State.Status}}
I1009 18:48:07.735408  314284 ssh_runner.go:195] Run: systemctl --version
I1009 18:48:07.735459  314284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141121
I1009 18:48:07.752421  314284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/functional-141121/id_rsa Username:docker}
I1009 18:48:07.852497  314284 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
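Note: per the stderr traces above, both the json and yaml listings are produced by SSHing into the node and querying the container runtime with crictl. A minimal sketch of reproducing the same data by hand (profile name taken from this run; the direct crictl call is an equivalent illustration, not the test's code path):

  out/minikube-linux-arm64 -p functional-141121 image ls --format yaml --alsologtostderr
  out/minikube-linux-arm64 -p functional-141121 ssh -- sudo crictl images --output json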

TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 ssh pgrep buildkitd: exit status 1 (276.655004ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image build -t localhost/my-image:functional-141121 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-141121 image build -t localhost/my-image:functional-141121 testdata/build --alsologtostderr: (3.30943675s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-141121 image build -t localhost/my-image:functional-141121 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2c0728424d7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-141121
--> 7f77262d24c
Successfully tagged localhost/my-image:functional-141121
7f77262d24cbb44bc9d436c60e7a4dd1c3cdc6f2f2a01feaef4a8bafff3bfac0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-141121 image build -t localhost/my-image:functional-141121 testdata/build --alsologtostderr:
I1009 18:48:08.222473  314383 out.go:360] Setting OutFile to fd 1 ...
I1009 18:48:08.223501  314383 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:08.223527  314383 out.go:374] Setting ErrFile to fd 2...
I1009 18:48:08.223533  314383 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:48:08.223838  314383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
I1009 18:48:08.224565  314383 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:08.225244  314383 config.go:182] Loaded profile config "functional-141121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:48:08.225759  314383 cli_runner.go:164] Run: docker container inspect functional-141121 --format={{.State.Status}}
I1009 18:48:08.243778  314383 ssh_runner.go:195] Run: systemctl --version
I1009 18:48:08.243842  314383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141121
I1009 18:48:08.261199  314383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/functional-141121/id_rsa Username:docker}
I1009 18:48:08.364776  314383 build_images.go:161] Building image from path: /tmp/build.1361877373.tar
I1009 18:48:08.364852  314383 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 18:48:08.373383  314383 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1361877373.tar
I1009 18:48:08.377240  314383 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1361877373.tar: stat -c "%s %y" /var/lib/minikube/build/build.1361877373.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1361877373.tar': No such file or directory
I1009 18:48:08.377270  314383 ssh_runner.go:362] scp /tmp/build.1361877373.tar --> /var/lib/minikube/build/build.1361877373.tar (3072 bytes)
I1009 18:48:08.395360  314383 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1361877373
I1009 18:48:08.403048  314383 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1361877373 -xf /var/lib/minikube/build/build.1361877373.tar
I1009 18:48:08.411082  314383 crio.go:315] Building image: /var/lib/minikube/build/build.1361877373
I1009 18:48:08.411171  314383 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-141121 /var/lib/minikube/build/build.1361877373 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1009 18:48:11.447678  314383 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-141121 /var/lib/minikube/build/build.1361877373 --cgroup-manager=cgroupfs: (3.036476408s)
I1009 18:48:11.447761  314383 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1361877373
I1009 18:48:11.457652  314383 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1361877373.tar
I1009 18:48:11.466091  314383 build_images.go:217] Built localhost/my-image:functional-141121 from /tmp/build.1361877373.tar
I1009 18:48:11.466121  314383 build_images.go:133] succeeded building to: functional-141121
I1009 18:48:11.466155  314383 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)
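Note: the STEP lines above imply a build context equivalent to the sketch below; the directory name, Dockerfile, and content.txt payload here are illustrative assumptions, while the suite's real context lives in testdata/build.

  mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
  printf 'hello\n' > content.txt                                    # placeholder payload (assumption)
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  out/minikube-linux-arm64 -p functional-141121 image build -t localhost/my-image:functional-141121 . --alsologtostderr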

TestFunctional/parallel/ImageCommands/Setup (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-141121
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image rm kicbase/echo-server:functional-141121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-141121 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-141121 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-141121 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-141121 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 309927: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-141121 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-141121 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [3710b026-2207-4c4d-920b-0e5292acd627] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [3710b026-2207-4c4d-920b-0e5292acd627] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002978264s
I1009 18:37:50.758279  286309 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-141121 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.129.131 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-141121 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
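Note: the TunnelCmd/serial group above is the LoadBalancer workflow end to end: start a tunnel, deploy the service, read the assigned ingress IP, then hit it from the host. A condensed sketch (the manifest and service name come from testdata/testsvc.yaml; 10.98.129.131 is the ingress IP observed in this run, and the curl check is an illustrative stand-in for the test's HTTP probe):

  out/minikube-linux-arm64 -p functional-141121 tunnel --alsologtostderr &
  kubectl --context functional-141121 apply -f testdata/testsvc.yaml
  kubectl --context functional-141121 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.98.129.131/
  kill %1    # stopping the tunnel is what DeleteTunnel verifies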

TestFunctional/parallel/ServiceCmd/List (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 service list -o json
functional_test.go:1504: Took "515.977039ms" to run "out/minikube-linux-arm64 -p functional-141121 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "364.693693ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.664116ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "402.719221ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "84.241289ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/MountCmd/any-port (8.19s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdany-port755496222/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760035665177474392" to /tmp/TestFunctionalparallelMountCmdany-port755496222/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760035665177474392" to /tmp/TestFunctionalparallelMountCmdany-port755496222/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760035665177474392" to /tmp/TestFunctionalparallelMountCmdany-port755496222/001/test-1760035665177474392
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (435.635686ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:47:45.614496  286309 retry.go:31] will retry after 671.041284ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 18:47 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 18:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 18:47 test-1760035665177474392
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh cat /mount-9p/test-1760035665177474392
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-141121 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b2328d16-652c-45e1-805d-490d7382e085] Pending
helpers_test.go:352: "busybox-mount" [b2328d16-652c-45e1-805d-490d7382e085] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b2328d16-652c-45e1-805d-490d7382e085] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b2328d16-652c-45e1-805d-490d7382e085] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003223049s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-141121 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdany-port755496222/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.19s)
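Note: the any-port case above reduces to a background 9p mount plus SSH-side verification and cleanup. A condensed sketch (the host directory is illustrative; the findmnt, ls, and umount invocations are the ones the test runs):

  out/minikube-linux-arm64 mount -p functional-141121 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-141121 ssh -- ls -la /mount-9p
  out/minikube-linux-arm64 -p functional-141121 ssh "sudo umount -f /mount-9p"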

TestFunctional/parallel/MountCmd/specific-port (2s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdspecific-port3052980281/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.294654ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:47:53.710038  286309 retry.go:31] will retry after 608.226185ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdspecific-port3052980281/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 ssh "sudo umount -f /mount-9p": exit status 1 (286.809363ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-141121 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdspecific-port3052980281/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T" /mount1: exit status 1 (611.312413ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:47:55.979086  286309 retry.go:31] will retry after 505.216228ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-141121 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-141121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2136272216/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
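Note: VerifyCleanup hinges on a single --kill=true invocation tearing down every mount daemon for the profile, rather than stopping the three mounts one by one (hence the "assuming dead" messages above). A minimal sketch of that path (host directory illustrative):

  out/minikube-linux-arm64 mount -p functional-141121 /tmp/hostdir:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-141121 /tmp/hostdir:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-141121 /tmp/hostdir:/mount3 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-141121 ssh "findmnt -T" /mount1
  out/minikube-linux-arm64 mount -p functional-141121 --kill=true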

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-141121
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-141121
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-141121
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (211.88s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1009 18:51:14.045856  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m30.983173544s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (211.88s)

TestMultiControlPlane/serial/DeployApp (6.51s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 kubectl -- rollout status deployment/busybox: (3.369441322s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-ggsd6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-n6ngh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-wv5jv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-ggsd6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-n6ngh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-wv5jv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-ggsd6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-n6ngh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-wv5jv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.51s)

TestMultiControlPlane/serial/PingHostFromPods (1.51s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-ggsd6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-ggsd6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-n6ngh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-n6ngh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-wv5jv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-wv5jv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)
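Note: the pipeline above runs nslookup for host.minikube.internal inside each busybox pod and slices out the resolved address (line 5, third space-separated field of busybox's nslookup output), then pings the host-side gateway 192.168.49.1 directly. One iteration, condensed from this run's commands (pod name taken from the log):

  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-ggsd6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-arm64 -p ha-305771 kubectl -- exec busybox-7b57f96db7-ggsd6 -- sh -c "ping -c 1 192.168.49.1"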

TestMultiControlPlane/serial/AddWorkerNode (60.77s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 node add --alsologtostderr -v 5
E1009 18:52:37.120665  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:37.154476  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:37.160845  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:37.172156  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:37.193495  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:37.235016  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:37.316450  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:37.477937  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:37.799353  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:38.441144  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:39.722519  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:42.284149  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:47.405540  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:52:57.647412  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 node add --alsologtostderr -v 5: (59.733355105s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5: (1.038710064s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.77s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-305771 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.068545425s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

TestMultiControlPlane/serial/CopyFile (20.19s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 status --output json --alsologtostderr -v 5: (1.065912881s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp testdata/cp-test.txt ha-305771:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1664879529/001/cp-test_ha-305771.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771:/home/docker/cp-test.txt ha-305771-m02:/home/docker/cp-test_ha-305771_ha-305771-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test_ha-305771_ha-305771-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771:/home/docker/cp-test.txt ha-305771-m03:/home/docker/cp-test_ha-305771_ha-305771-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m03 "sudo cat /home/docker/cp-test_ha-305771_ha-305771-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771:/home/docker/cp-test.txt ha-305771-m04:/home/docker/cp-test_ha-305771_ha-305771-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m04 "sudo cat /home/docker/cp-test_ha-305771_ha-305771-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp testdata/cp-test.txt ha-305771-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1664879529/001/cp-test_ha-305771-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m02:/home/docker/cp-test.txt ha-305771:/home/docker/cp-test_ha-305771-m02_ha-305771.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771 "sudo cat /home/docker/cp-test_ha-305771-m02_ha-305771.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m02:/home/docker/cp-test.txt ha-305771-m03:/home/docker/cp-test_ha-305771-m02_ha-305771-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m03 "sudo cat /home/docker/cp-test_ha-305771-m02_ha-305771-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m02:/home/docker/cp-test.txt ha-305771-m04:/home/docker/cp-test_ha-305771-m02_ha-305771-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m04 "sudo cat /home/docker/cp-test_ha-305771-m02_ha-305771-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp testdata/cp-test.txt ha-305771-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1664879529/001/cp-test_ha-305771-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m03:/home/docker/cp-test.txt ha-305771:/home/docker/cp-test_ha-305771-m03_ha-305771.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m03 "sudo cat /home/docker/cp-test.txt"
E1009 18:53:18.129379  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771 "sudo cat /home/docker/cp-test_ha-305771-m03_ha-305771.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m03:/home/docker/cp-test.txt ha-305771-m02:/home/docker/cp-test_ha-305771-m03_ha-305771-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test_ha-305771-m03_ha-305771-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m03:/home/docker/cp-test.txt ha-305771-m04:/home/docker/cp-test_ha-305771-m03_ha-305771-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m04 "sudo cat /home/docker/cp-test_ha-305771-m03_ha-305771-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp testdata/cp-test.txt ha-305771-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1664879529/001/cp-test_ha-305771-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m04:/home/docker/cp-test.txt ha-305771:/home/docker/cp-test_ha-305771-m04_ha-305771.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771 "sudo cat /home/docker/cp-test_ha-305771-m04_ha-305771.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m04:/home/docker/cp-test.txt ha-305771-m02:/home/docker/cp-test_ha-305771-m04_ha-305771-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test_ha-305771-m04_ha-305771-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 cp ha-305771-m04:/home/docker/cp-test.txt ha-305771-m03:/home/docker/cp-test_ha-305771-m04_ha-305771-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m03 "sudo cat /home/docker/cp-test_ha-305771-m04_ha-305771-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.19s)
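Note: CopyFile walks minikube cp through every direction: host to node, node back to the host, and node to node, with each copy followed by an ssh cat to verify the payload arrived intact. One round trip, condensed from the commands above (the /tmp destination here is illustrative):

  out/minikube-linux-arm64 -p ha-305771 cp testdata/cp-test.txt ha-305771:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-305771 cp ha-305771:/home/docker/cp-test.txt /tmp/cp-test_ha-305771.txt
  out/minikube-linux-arm64 -p ha-305771 cp ha-305771:/home/docker/cp-test.txt ha-305771-m02:/home/docker/cp-test_ha-305771_ha-305771-m02.txt
  out/minikube-linux-arm64 -p ha-305771 ssh -n ha-305771-m02 "sudo cat /home/docker/cp-test_ha-305771_ha-305771-m02.txt"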

TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 node stop m02 --alsologtostderr -v 5: (11.968513466s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5: exit status 7 (778.3716ms)

                                                
                                                
-- stdout --
	ha-305771
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-305771-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-305771-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-305771-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:53:37.800757  329819 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:53:37.800953  329819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:53:37.800980  329819 out.go:374] Setting ErrFile to fd 2...
	I1009 18:53:37.801019  329819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:53:37.801899  329819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:53:37.802865  329819 out.go:368] Setting JSON to false
	I1009 18:53:37.802941  329819 mustload.go:65] Loading cluster: ha-305771
	I1009 18:53:37.803443  329819 config.go:182] Loaded profile config "ha-305771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:53:37.803520  329819 status.go:174] checking status of ha-305771 ...
	I1009 18:53:37.802996  329819 notify.go:220] Checking for updates...
	I1009 18:53:37.804878  329819 cli_runner.go:164] Run: docker container inspect ha-305771 --format={{.State.Status}}
	I1009 18:53:37.824591  329819 status.go:371] ha-305771 host status = "Running" (err=<nil>)
	I1009 18:53:37.824687  329819 host.go:66] Checking if "ha-305771" exists ...
	I1009 18:53:37.825106  329819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-305771
	I1009 18:53:37.850345  329819 host.go:66] Checking if "ha-305771" exists ...
	I1009 18:53:37.850656  329819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:53:37.850699  329819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-305771
	I1009 18:53:37.871208  329819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/ha-305771/id_rsa Username:docker}
	I1009 18:53:37.979822  329819 ssh_runner.go:195] Run: systemctl --version
	I1009 18:53:37.987469  329819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:53:38.001883  329819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:53:38.067664  329819 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-09 18:53:38.056136891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:53:38.068218  329819 kubeconfig.go:125] found "ha-305771" server: "https://192.168.49.254:8443"
	I1009 18:53:38.068251  329819 api_server.go:166] Checking apiserver status ...
	I1009 18:53:38.068300  329819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:53:38.081522  329819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1270/cgroup
	I1009 18:53:38.091082  329819 api_server.go:182] apiserver freezer: "12:freezer:/docker/38f37c574711bfbf08c0b6feefbc3d82a9071a8e798b5d3472e66271feed5369/crio/crio-7bf734f51f829d8b547dd6e904a00594be67efc6d636581e5175a7f19185d919"
	I1009 18:53:38.091205  329819 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/38f37c574711bfbf08c0b6feefbc3d82a9071a8e798b5d3472e66271feed5369/crio/crio-7bf734f51f829d8b547dd6e904a00594be67efc6d636581e5175a7f19185d919/freezer.state
	I1009 18:53:38.099282  329819 api_server.go:204] freezer state: "THAWED"
	I1009 18:53:38.099311  329819 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 18:53:38.108055  329819 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 18:53:38.108087  329819 status.go:463] ha-305771 apiserver status = Running (err=<nil>)
	I1009 18:53:38.108098  329819 status.go:176] ha-305771 status: &{Name:ha-305771 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:53:38.108115  329819 status.go:174] checking status of ha-305771-m02 ...
	I1009 18:53:38.108464  329819 cli_runner.go:164] Run: docker container inspect ha-305771-m02 --format={{.State.Status}}
	I1009 18:53:38.129896  329819 status.go:371] ha-305771-m02 host status = "Stopped" (err=<nil>)
	I1009 18:53:38.129919  329819 status.go:384] host is not running, skipping remaining checks
	I1009 18:53:38.129926  329819 status.go:176] ha-305771-m02 status: &{Name:ha-305771-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:53:38.129946  329819 status.go:174] checking status of ha-305771-m03 ...
	I1009 18:53:38.130409  329819 cli_runner.go:164] Run: docker container inspect ha-305771-m03 --format={{.State.Status}}
	I1009 18:53:38.149102  329819 status.go:371] ha-305771-m03 host status = "Running" (err=<nil>)
	I1009 18:53:38.149130  329819 host.go:66] Checking if "ha-305771-m03" exists ...
	I1009 18:53:38.149447  329819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-305771-m03
	I1009 18:53:38.169416  329819 host.go:66] Checking if "ha-305771-m03" exists ...
	I1009 18:53:38.169738  329819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:53:38.169787  329819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-305771-m03
	I1009 18:53:38.189352  329819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/ha-305771-m03/id_rsa Username:docker}
	I1009 18:53:38.292464  329819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:53:38.306076  329819 kubeconfig.go:125] found "ha-305771" server: "https://192.168.49.254:8443"
	I1009 18:53:38.306101  329819 api_server.go:166] Checking apiserver status ...
	I1009 18:53:38.306195  329819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:53:38.317919  329819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1175/cgroup
	I1009 18:53:38.326535  329819 api_server.go:182] apiserver freezer: "12:freezer:/docker/3c79e1b0af3356f32d8508c8c8d9f5652aced499b019460355939b1417c19fff/crio/crio-f37ed338a9805fdcfe535d2ec79e39c79bd0f549d694d65d926429fe393c9ca0"
	I1009 18:53:38.326663  329819 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3c79e1b0af3356f32d8508c8c8d9f5652aced499b019460355939b1417c19fff/crio/crio-f37ed338a9805fdcfe535d2ec79e39c79bd0f549d694d65d926429fe393c9ca0/freezer.state
	I1009 18:53:38.334587  329819 api_server.go:204] freezer state: "THAWED"
	I1009 18:53:38.334615  329819 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 18:53:38.343004  329819 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 18:53:38.343082  329819 status.go:463] ha-305771-m03 apiserver status = Running (err=<nil>)
	I1009 18:53:38.343107  329819 status.go:176] ha-305771-m03 status: &{Name:ha-305771-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:53:38.343155  329819 status.go:174] checking status of ha-305771-m04 ...
	I1009 18:53:38.343560  329819 cli_runner.go:164] Run: docker container inspect ha-305771-m04 --format={{.State.Status}}
	I1009 18:53:38.361856  329819 status.go:371] ha-305771-m04 host status = "Running" (err=<nil>)
	I1009 18:53:38.361884  329819 host.go:66] Checking if "ha-305771-m04" exists ...
	I1009 18:53:38.362234  329819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-305771-m04
	I1009 18:53:38.380113  329819 host.go:66] Checking if "ha-305771-m04" exists ...
	I1009 18:53:38.380505  329819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:53:38.380557  329819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-305771-m04
	I1009 18:53:38.402755  329819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/ha-305771-m04/id_rsa Username:docker}
	I1009 18:53:38.503571  329819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:53:38.517874  329819 status.go:176] ha-305771-m04 status: &{Name:ha-305771-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
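With ha-305771-m02 stopped, "minikube status" still prints the per-node table but signals the degraded cluster through a non-zero exit code (7 in this run), so automation can branch on the return code instead of parsing stdout. A minimal sketch, assuming the same binary and profile as above:

	out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
	rc=$?
	if [ "$rc" -ne 0 ]; then
		echo "at least one ha-305771 node is not fully running (status exited $rc)"
	fi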

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (29.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 node start m02 --alsologtostderr -v 5
E1009 18:53:59.092881  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 node start m02 --alsologtostderr -v 5: (28.025189156s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5: (1.082383487s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.301646905s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 stop --alsologtostderr -v 5: (26.593050812s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 start --wait true --alsologtostderr -v 5
E1009 18:55:21.014640  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:14.045049  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 start --wait true --alsologtostderr -v 5: (1m40.785106274s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 node delete m03 --alsologtostderr -v 5: (11.135617489s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.09s)
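The readiness check at ha_test.go:521 above is a plain kubectl go-template over each node's conditions; run standalone it prints one Ready status per remaining node. A sketch using the same template as the test (minus the extra quoting the harness adds):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'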

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 stop --alsologtostderr -v 5: (35.604856445s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5: exit status 7 (120.361252ms)

                                                
                                                
-- stdout --
	ha-305771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-305771-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-305771-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:57:05.981234  341643 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:57:05.981431  341643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:57:05.981460  341643 out.go:374] Setting ErrFile to fd 2...
	I1009 18:57:05.981479  341643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:57:05.981786  341643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 18:57:05.982032  341643 out.go:368] Setting JSON to false
	I1009 18:57:05.982096  341643 mustload.go:65] Loading cluster: ha-305771
	I1009 18:57:05.982176  341643 notify.go:220] Checking for updates...
	I1009 18:57:05.982619  341643 config.go:182] Loaded profile config "ha-305771": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:57:05.982657  341643 status.go:174] checking status of ha-305771 ...
	I1009 18:57:05.983312  341643 cli_runner.go:164] Run: docker container inspect ha-305771 --format={{.State.Status}}
	I1009 18:57:06.002896  341643 status.go:371] ha-305771 host status = "Stopped" (err=<nil>)
	I1009 18:57:06.002917  341643 status.go:384] host is not running, skipping remaining checks
	I1009 18:57:06.002924  341643 status.go:176] ha-305771 status: &{Name:ha-305771 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:57:06.002962  341643 status.go:174] checking status of ha-305771-m02 ...
	I1009 18:57:06.003275  341643 cli_runner.go:164] Run: docker container inspect ha-305771-m02 --format={{.State.Status}}
	I1009 18:57:06.030920  341643 status.go:371] ha-305771-m02 host status = "Stopped" (err=<nil>)
	I1009 18:57:06.030941  341643 status.go:384] host is not running, skipping remaining checks
	I1009 18:57:06.030948  341643 status.go:176] ha-305771-m02 status: &{Name:ha-305771-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:57:06.030968  341643 status.go:174] checking status of ha-305771-m04 ...
	I1009 18:57:06.031273  341643 cli_runner.go:164] Run: docker container inspect ha-305771-m04 --format={{.State.Status}}
	I1009 18:57:06.051606  341643 status.go:371] ha-305771-m04 host status = "Stopped" (err=<nil>)
	I1009 18:57:06.051634  341643 status.go:384] host is not running, skipping remaining checks
	I1009 18:57:06.051641  341643 status.go:176] ha-305771-m04 status: &{Name:ha-305771-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (160.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1009 18:57:37.155631  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:58:04.855986  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m39.832107372s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (160.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (81.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 node add --control-plane --alsologtostderr -v 5: (1m20.613350152s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-305771 status --alsologtostderr -v 5: (1.085940274s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.136206607s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

                                                
                                    
x
+
TestJSONOutput/start/Command (81.49s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-732643 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1009 19:02:37.154667  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-732643 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.487607766s)
--- PASS: TestJSONOutput/start/Command (81.49s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.77s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-732643 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-732643 --output=json --user=testUser: (5.768749165s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-239960 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-239960 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.653883ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ca992be9-ab34-4b71-b200-413c0a939e14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-239960] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"46d20110-a73b-493b-bafe-075babbf8239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"09166281-a84c-43f2-87c1-986a2c35b40f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"626fd89d-8d94-4d73-a849-bcf0eb55f9ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig"}}
	{"specversion":"1.0","id":"7b9d0241-04a5-4f56-844a-f3b6c22a2721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube"}}
	{"specversion":"1.0","id":"b06a337d-3271-449b-86ec-95facbb8d8c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6b67236c-2497-4462-a917-6be7274b25f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5e8c7074-a9b4-4553-8077-590a53ac8293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-239960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-239960
--- PASS: TestErrorJSONOutput (0.24s)
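Every stdout line above is a CloudEvents-style JSON object, so the failure can be extracted mechanically rather than read off the dump. A sketch assuming jq is available (it is not part of this test), filtering for the error event produced by the unsupported "fail" driver:

	out/minikube-linux-arm64 start -p json-output-error-239960 --memory=3072 --output=json --wait=true --driver=fail \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# prints: The driver 'fail' is not supported on linux/arm64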

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (70.58s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-105200 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-105200 --network=: (1m8.416344036s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-105200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-105200
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-105200: (2.135118003s)
--- PASS: TestKicCustomNetwork/create_custom_network (70.58s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-507979 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-507979 --network=bridge: (34.607169238s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-507979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-507979
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-507979: (2.010173488s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.64s)

                                                
                                    
x
+
TestKicExistingNetwork (36.74s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1009 19:04:43.106222  286309 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 19:04:43.122952  286309 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 19:04:43.123035  286309 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1009 19:04:43.123051  286309 cli_runner.go:164] Run: docker network inspect existing-network
W1009 19:04:43.139269  286309 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1009 19:04:43.139306  286309 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1009 19:04:43.139321  286309 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1009 19:04:43.139441  286309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 19:04:43.155962  286309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6253c37b671b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:3c:b6:47:0c:22} reservation:<nil>}
I1009 19:04:43.156276  286309 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dc3c0}
I1009 19:04:43.156301  286309 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1009 19:04:43.156354  286309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1009 19:04:43.212026  286309 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-048293 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-048293 --network=existing-network: (34.553575625s)
helpers_test.go:175: Cleaning up "existing-network-048293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-048293
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-048293: (2.052176613s)
I1009 19:05:19.834268  286309 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.74s)
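Here the "existing" network is created with plain docker before minikube starts, and minikube only attaches to it. A trimmed sketch of the same two steps the log shows (192.168.58.0/24 is simply the free subnet this run happened to pick):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
		-o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true existing-network
	out/minikube-linux-arm64 start -p existing-network-048293 --network=existing-network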

                                                
                                    
x
+
TestKicCustomSubnet (38.46s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-055265 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-055265 --subnet=192.168.60.0/24: (36.254774259s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-055265 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-055265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-055265
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-055265: (2.178977597s)
--- PASS: TestKicCustomSubnet (38.46s)

                                                
                                    
x
+
TestKicStaticIP (36.74s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-222004 --static-ip=192.168.200.200
E1009 19:06:14.047573  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-222004 --static-ip=192.168.200.200: (34.514433756s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-222004 ip
helpers_test.go:175: Cleaning up "static-ip-222004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-222004
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-222004: (2.065408972s)
--- PASS: TestKicStaticIP (36.74s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (73.75s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-627567 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-627567 --driver=docker  --container-runtime=crio: (33.331993327s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-630168 --driver=docker  --container-runtime=crio
E1009 19:07:37.154825  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-630168 --driver=docker  --container-runtime=crio: (34.854916231s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-627567
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-630168
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-630168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-630168
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-630168: (2.118088528s)
helpers_test.go:175: Cleaning up "first-627567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-627567
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-627567: (1.928831844s)
--- PASS: TestMinikubeProfile (73.75s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.72s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-180003 --memory=3072 --mount-string /tmp/TestMountStartserial4176413842/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-180003 --memory=3072 --mount-string /tmp/TestMountStartserial4176413842/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.715601188s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-180003 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-182276 --memory=3072 --mount-string /tmp/TestMountStartserial4176413842/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-182276 --memory=3072 --mount-string /tmp/TestMountStartserial4176413842/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.390880444s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.39s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-182276 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-180003 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-180003 --alsologtostderr -v=5: (1.62679478s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-182276 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-182276
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-182276: (1.228738603s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.12s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-182276
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-182276: (7.114905994s)
--- PASS: TestMountStart/serial/RestartStopped (8.12s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-182276 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (137.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-713826 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 19:09:00.218309  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:17.123045  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-713826 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.580410038s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.11s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-713826 -- rollout status deployment/busybox: (3.224915742s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-dkz7x -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-l9jhg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-dkz7x -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-l9jhg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-dkz7x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-l9jhg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.97s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-dkz7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-dkz7x -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-l9jhg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-713826 -- exec busybox-7b57f96db7-l9jhg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
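The two exec pairs above first resolve host.minikube.internal from inside a busybox pod (the awk/cut pipeline trims the nslookup output down to the address, 192.168.67.1 in this run) and then ping that address to confirm the pod can reach the host. The same check can be run by hand with plain kubectl; a sketch using a pod name from this run:

	kubectl --context multinode-713826 exec busybox-7b57f96db7-dkz7x -- \
		sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"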

                                                
                                    
x
+
TestMultiNode/serial/AddNode (57.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-713826 -v=5 --alsologtostderr
E1009 19:11:14.045076  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-713826 -v=5 --alsologtostderr: (56.873154853s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.58s)

                                                
                                    

TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-713826 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.77s)
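profile list --output json lends itself to programmatic checks. Below is a sketch of decoding it, under the assumption (not confirmed by this log) that the document groups profiles into arrays such as "valid" and "invalid" whose entries carry a "Name" field; only those fields are declared.

    // profiles.go - a hedged sketch of consuming "minikube profile list --output json".
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        // Assumption: top-level keys map to arrays of profile objects with a Name field.
        var lists map[string][]struct {
            Name string `json:"Name"`
        }
        if err := json.Unmarshal(out, &lists); err != nil {
            log.Fatal(err)
        }
        for group, profiles := range lists {
            for _, p := range profiles {
                fmt.Printf("%s: %s\n", group, p.Name)
            }
        }
    }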

                                                
                                    
TestMultiNode/serial/CopyFile (10.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp testdata/cp-test.txt multinode-713826:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2186366089/001/cp-test_multinode-713826.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826:/home/docker/cp-test.txt multinode-713826-m02:/home/docker/cp-test_multinode-713826_multinode-713826-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m02 "sudo cat /home/docker/cp-test_multinode-713826_multinode-713826-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826:/home/docker/cp-test.txt multinode-713826-m03:/home/docker/cp-test_multinode-713826_multinode-713826-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m03 "sudo cat /home/docker/cp-test_multinode-713826_multinode-713826-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp testdata/cp-test.txt multinode-713826-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2186366089/001/cp-test_multinode-713826-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826-m02:/home/docker/cp-test.txt multinode-713826:/home/docker/cp-test_multinode-713826-m02_multinode-713826.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826 "sudo cat /home/docker/cp-test_multinode-713826-m02_multinode-713826.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826-m02:/home/docker/cp-test.txt multinode-713826-m03:/home/docker/cp-test_multinode-713826-m02_multinode-713826-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m03 "sudo cat /home/docker/cp-test_multinode-713826-m02_multinode-713826-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp testdata/cp-test.txt multinode-713826-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2186366089/001/cp-test_multinode-713826-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826-m03:/home/docker/cp-test.txt multinode-713826:/home/docker/cp-test_multinode-713826-m03_multinode-713826.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826 "sudo cat /home/docker/cp-test_multinode-713826-m03_multinode-713826.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 cp multinode-713826-m03:/home/docker/cp-test.txt multinode-713826-m02:/home/docker/cp-test_multinode-713826-m03_multinode-713826-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 ssh -n multinode-713826-m02 "sudo cat /home/docker/cp-test_multinode-713826-m03_multinode-713826-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.71s)
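All of the cp/ssh pairs above follow one copy-then-verify pattern: push a file with "minikube cp", read it back over "minikube ssh", and compare contents. A compact Go sketch of that pattern (not the helpers' own code), reusing the profile, node, and paths from this run:

    // cp_roundtrip.go - copy a file to a node and verify it arrived intact.
    package main

    import (
        "bytes"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const (
            profile = "multinode-713826"
            node    = "multinode-713826-m02"
            local   = "testdata/cp-test.txt"
            remote  = "/home/docker/cp-test.txt"
        )
        if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "cp", local, node+":"+remote).CombinedOutput(); err != nil {
            log.Fatalf("cp failed: %v\n%s", err, out)
        }
        got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "ssh", "-n", node, "sudo cat "+remote).Output()
        if err != nil {
            log.Fatal(err)
        }
        want, err := os.ReadFile(local)
        if err != nil {
            log.Fatal(err)
        }
        if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
            log.Fatalf("content mismatch:\n got: %q\nwant: %q", got, want)
        }
        log.Println("round-trip OK")
    }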

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-713826 node stop m03: (1.211775704s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-713826 status: exit status 7 (569.31342ms)

                                                
                                                
-- stdout --
	multinode-713826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-713826-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-713826-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-713826 status --alsologtostderr: exit status 7 (574.877559ms)

                                                
                                                
-- stdout --
	multinode-713826
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-713826-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-713826-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:11:54.780583  392053 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:11:54.780719  392053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:11:54.780731  392053 out.go:374] Setting ErrFile to fd 2...
	I1009 19:11:54.780737  392053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:11:54.780999  392053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:11:54.781192  392053 out.go:368] Setting JSON to false
	I1009 19:11:54.781227  392053 mustload.go:65] Loading cluster: multinode-713826
	I1009 19:11:54.781645  392053 notify.go:220] Checking for updates...
	I1009 19:11:54.782320  392053 config.go:182] Loaded profile config "multinode-713826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:11:54.782362  392053 status.go:174] checking status of multinode-713826 ...
	I1009 19:11:54.783053  392053 cli_runner.go:164] Run: docker container inspect multinode-713826 --format={{.State.Status}}
	I1009 19:11:54.805534  392053 status.go:371] multinode-713826 host status = "Running" (err=<nil>)
	I1009 19:11:54.805558  392053 host.go:66] Checking if "multinode-713826" exists ...
	I1009 19:11:54.806749  392053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-713826
	I1009 19:11:54.826814  392053 host.go:66] Checking if "multinode-713826" exists ...
	I1009 19:11:54.827132  392053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:11:54.827184  392053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-713826
	I1009 19:11:54.851887  392053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/multinode-713826/id_rsa Username:docker}
	I1009 19:11:54.967612  392053 ssh_runner.go:195] Run: systemctl --version
	I1009 19:11:54.974204  392053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:11:54.987116  392053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:11:55.059364  392053 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:11:55.049397328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:11:55.059978  392053 kubeconfig.go:125] found "multinode-713826" server: "https://192.168.67.2:8443"
	I1009 19:11:55.060003  392053 api_server.go:166] Checking apiserver status ...
	I1009 19:11:55.060047  392053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:11:55.072711  392053 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1009 19:11:55.082314  392053 api_server.go:182] apiserver freezer: "12:freezer:/docker/e12289d39525f292a034691d199b03573e870274bbe42d9c09ed1174f29b499d/crio/crio-4cc8d2847e2c4733cdb494a7a938f440e74f41149398ff49b22b488d520b198d"
	I1009 19:11:55.082391  392053 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e12289d39525f292a034691d199b03573e870274bbe42d9c09ed1174f29b499d/crio/crio-4cc8d2847e2c4733cdb494a7a938f440e74f41149398ff49b22b488d520b198d/freezer.state
	I1009 19:11:55.090696  392053 api_server.go:204] freezer state: "THAWED"
	I1009 19:11:55.090727  392053 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1009 19:11:55.099660  392053 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1009 19:11:55.099702  392053 status.go:463] multinode-713826 apiserver status = Running (err=<nil>)
	I1009 19:11:55.099736  392053 status.go:176] multinode-713826 status: &{Name:multinode-713826 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:11:55.099763  392053 status.go:174] checking status of multinode-713826-m02 ...
	I1009 19:11:55.100114  392053 cli_runner.go:164] Run: docker container inspect multinode-713826-m02 --format={{.State.Status}}
	I1009 19:11:55.118650  392053 status.go:371] multinode-713826-m02 host status = "Running" (err=<nil>)
	I1009 19:11:55.118679  392053 host.go:66] Checking if "multinode-713826-m02" exists ...
	I1009 19:11:55.119007  392053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-713826-m02
	I1009 19:11:55.136873  392053 host.go:66] Checking if "multinode-713826-m02" exists ...
	I1009 19:11:55.137239  392053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:11:55.137288  392053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-713826-m02
	I1009 19:11:55.155927  392053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/21139-284447/.minikube/machines/multinode-713826-m02/id_rsa Username:docker}
	I1009 19:11:55.260245  392053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:11:55.274234  392053 status.go:176] multinode-713826-m02 status: &{Name:multinode-713826-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:11:55.274283  392053 status.go:174] checking status of multinode-713826-m03 ...
	I1009 19:11:55.274589  392053 cli_runner.go:164] Run: docker container inspect multinode-713826-m03 --format={{.State.Status}}
	I1009 19:11:55.296668  392053 status.go:371] multinode-713826-m03 host status = "Stopped" (err=<nil>)
	I1009 19:11:55.296691  392053 status.go:384] host is not running, skipping remaining checks
	I1009 19:11:55.296699  392053 status.go:176] multinode-713826-m03 status: &{Name:multinode-713826-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
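status deliberately exits non-zero (7 in this run) once any node is down, while still printing the per-node report, so callers have to keep stdout even when the command "fails". A sketch of tolerating that exit the way the test does; the specific code 7 is simply what this run produced, not a contract asserted here:

    // status_exitcode.go - read the status report even when the exit code is non-zero.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-713826", "status")
        out, err := cmd.Output() // stdout is still captured on a non-zero exit
        code := 0
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            code = exitErr.ExitCode() // 7 in this run, with one worker stopped
        } else if err != nil {
            log.Fatal(err) // the binary could not be run at all
        }
        fmt.Printf("exit code %d\n%s", code, out)
    }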

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-713826 node start m03 -v=5 --alsologtostderr: (7.833005065s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.62s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (75.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-713826
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-713826
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-713826: (24.749706143s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-713826 --wait=true -v=5 --alsologtostderr
E1009 19:12:37.154619  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-713826 --wait=true -v=5 --alsologtostderr: (50.944442898s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-713826
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.83s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-713826 node delete m03: (4.875363537s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.61s)
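The go-template above reduces each node to its Ready condition after m03 is deleted. Below is a sketch of the same readiness count done via "-o json" and encoding/json, which is easier to assert on than template output; it is an illustrative alternative, not the test's code.

    // ready_nodes.go - count nodes whose Ready condition is True.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type nodes struct {
        Items []struct {
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var nl nodes
        if err := json.Unmarshal(out, &nl); err != nil {
            log.Fatal(err)
        }
        ready := 0
        for _, n := range nl.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    ready++
                }
            }
        }
        fmt.Println("ready nodes:", ready) // expected: 2 after deleting m03
    }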

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-713826 stop: (23.57297527s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-713826 status: exit status 7 (97.905087ms)

                                                
                                                
-- stdout --
	multinode-713826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-713826-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-713826 status --alsologtostderr: exit status 7 (96.985141ms)

                                                
                                                
-- stdout --
	multinode-713826
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-713826-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:13:49.090797  399802 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:13:49.090930  399802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:13:49.090940  399802 out.go:374] Setting ErrFile to fd 2...
	I1009 19:13:49.090945  399802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:13:49.091202  399802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:13:49.091387  399802 out.go:368] Setting JSON to false
	I1009 19:13:49.091432  399802 mustload.go:65] Loading cluster: multinode-713826
	I1009 19:13:49.091507  399802 notify.go:220] Checking for updates...
	I1009 19:13:49.092439  399802 config.go:182] Loaded profile config "multinode-713826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:13:49.092466  399802 status.go:174] checking status of multinode-713826 ...
	I1009 19:13:49.092991  399802 cli_runner.go:164] Run: docker container inspect multinode-713826 --format={{.State.Status}}
	I1009 19:13:49.111367  399802 status.go:371] multinode-713826 host status = "Stopped" (err=<nil>)
	I1009 19:13:49.111392  399802 status.go:384] host is not running, skipping remaining checks
	I1009 19:13:49.111399  399802 status.go:176] multinode-713826 status: &{Name:multinode-713826 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:13:49.111436  399802 status.go:174] checking status of multinode-713826-m02 ...
	I1009 19:13:49.111741  399802 cli_runner.go:164] Run: docker container inspect multinode-713826-m02 --format={{.State.Status}}
	I1009 19:13:49.135755  399802 status.go:371] multinode-713826-m02 host status = "Stopped" (err=<nil>)
	I1009 19:13:49.135782  399802 status.go:384] host is not running, skipping remaining checks
	I1009 19:13:49.135789  399802 status.go:176] multinode-713826-m02 status: &{Name:multinode-713826-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.77s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-713826 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-713826 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.479320527s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-713826 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.18s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-713826
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-713826-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-713826-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.100012ms)

                                                
                                                
-- stdout --
	* [multinode-713826-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-713826-m02' is duplicated with machine name 'multinode-713826-m02' in profile 'multinode-713826'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-713826-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-713826-m03 --driver=docker  --container-runtime=crio: (34.900376852s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-713826
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-713826: exit status 80 (345.182505ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-713826 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-713826-m03 already exists in multinode-713826-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-713826-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-713826-m03: (1.962804454s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.36s)

                                                
                                    
TestPreload (153.14s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-237313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1009 19:16:14.045882  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-237313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m0.033479356s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-237313 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-237313 image pull gcr.io/k8s-minikube/busybox: (2.444698585s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-237313
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-237313: (5.759047557s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-237313 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1009 19:17:37.154956  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-237313 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m22.302203547s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-237313 image list
helpers_test.go:175: Cleaning up "test-preload-237313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-237313
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-237313: (2.35786281s)
--- PASS: TestPreload (153.14s)
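The final "image list" call is what shows that the busybox image pulled before the stop survived the preload-backed restart. A small sketch of that check (an assumed reading of the assertion, not the test's code):

    // preload_image_check.go - confirm a previously pulled image is still listed after restart.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "test-preload-237313",
            "image", "list").Output()
        if err != nil {
            log.Fatal(err)
        }
        if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
            fmt.Println("pulled image survived the restart")
        } else {
            log.Fatalf("busybox missing from image list:\n%s", out)
        }
    }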

                                                
                                    
TestScheduledStopUnix (107.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-891160 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-891160 --memory=3072 --driver=docker  --container-runtime=crio: (31.609383393s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-891160 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-891160 -n scheduled-stop-891160
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-891160 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 19:18:27.064319  286309 retry.go:31] will retry after 105.39µs: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.065492  286309 retry.go:31] will retry after 171.971µs: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.066627  286309 retry.go:31] will retry after 252.627µs: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.067759  286309 retry.go:31] will retry after 440.966µs: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.068896  286309 retry.go:31] will retry after 697.734µs: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.070028  286309 retry.go:31] will retry after 748.103µs: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.071149  286309 retry.go:31] will retry after 872.351µs: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.072265  286309 retry.go:31] will retry after 2.556841ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.075467  286309 retry.go:31] will retry after 2.67688ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.078740  286309 retry.go:31] will retry after 2.576212ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.081942  286309 retry.go:31] will retry after 6.787412ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.089173  286309 retry.go:31] will retry after 8.277239ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.099557  286309 retry.go:31] will retry after 18.58441ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.118845  286309 retry.go:31] will retry after 20.97482ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.140112  286309 retry.go:31] will retry after 23.671021ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
I1009 19:18:27.164540  286309 retry.go:31] will retry after 27.596423ms: open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/scheduled-stop-891160/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-891160 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-891160 -n scheduled-stop-891160
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-891160
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-891160 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-891160
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-891160: exit status 7 (76.249545ms)

                                                
                                                
-- stdout --
	scheduled-stop-891160
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-891160 -n scheduled-stop-891160
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-891160 -n scheduled-stop-891160: exit status 7 (73.148573ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-891160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-891160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-891160: (4.599631619s)
--- PASS: TestScheduledStopUnix (107.88s)
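The retry.go lines above show the harness polling for the scheduled-stop pid file with delays that grow on every attempt. A simplified Go sketch of that pattern; the path is an illustrative placeholder and the doubling only approximates the delays in the log, so this is not minikube's own retry implementation.

    // wait_pidfile.go - poll for a file with growing delays between attempts.
    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    func waitForFile(path string, attempts int) error {
        delay := 100 * time.Microsecond
        for i := 0; i < attempts; i++ {
            if _, err := os.Stat(path); err == nil {
                return nil
            } else if !os.IsNotExist(err) {
                return err
            }
            log.Printf("will retry after %v: %s not found", delay, path)
            time.Sleep(delay)
            delay *= 2 // roughly doubling, as the logged delays do
        }
        return fmt.Errorf("%s never appeared after %d attempts", path, attempts)
    }

    func main() {
        // Placeholder path for illustration only.
        if err := waitForFile("/tmp/scheduled-stop/pid", 16); err != nil {
            log.Fatal(err)
        }
        fmt.Println("pid file present")
    }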

                                                
                                    
TestInsufficientStorage (13.94s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-402794 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-402794 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.429998361s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1b0f7210-6b02-48ac-90fa-c433b7511eab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-402794] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d20fdef-24f6-4623-ad51-a98895e58408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"ebd9f7da-1f4e-433c-bf56-47c0cf66480b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"718a95d3-d331-4202-893c-9a879b807aa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig"}}
	{"specversion":"1.0","id":"9edf8d25-d6a1-4b4e-940b-94399e294757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube"}}
	{"specversion":"1.0","id":"a680d3e2-9591-401a-a4e8-dc025530196f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"60ff0631-5fe3-41e6-b7d9-c3b1710dd48e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"81f7c7c3-99d1-4c19-9688-84a10389f1f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"af7366c6-3637-4a6e-85bf-66e6f52ab431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f8f85372-5d3d-4130-a0ea-9715e089b4bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8bb33d7f-8feb-4a8e-a870-12140e5ceb76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c66d2ef1-64fd-44f7-8fb0-3fd3a32111bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-402794\" primary control-plane node in \"insufficient-storage-402794\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"564c1892-ff09-4178-8f30-6e5308739d08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"56f93b6c-869d-4643-ab72-37a319c00fef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"25bfadb3-f452-4652-98ad-32dbbbe681cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-402794 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-402794 --output=json --layout=cluster: exit status 7 (301.161721ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-402794","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-402794","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:19:54.500467  415959 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-402794" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-402794 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-402794 --output=json --layout=cluster: exit status 7 (295.339974ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-402794","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-402794","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:19:54.797573  416027 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-402794" does not appear in /home/jenkins/minikube-integration/21139-284447/kubeconfig
	E1009 19:19:54.807460  416027 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/insufficient-storage-402794/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-402794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-402794
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-402794: (1.913846205s)
--- PASS: TestInsufficientStorage (13.94s)
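--output=json turns start into a stream of one CloudEvents-style JSON object per line, ending here in an error event (RSRC_DOCKER_STORAGE, exit code 26). A sketch of consuming that stream; only fields visible in the log above are declared, and the pipe usage is illustrative.

    // events.go - decode line-delimited minikube --output=json events from stdin,
    // e.g. out/minikube-linux-arm64 start ... --output=json | go run events.go
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        Type string `json:"type"`
        Data struct {
            Message  string `json:"message"`
            Name     string `json:"name"`
            Exitcode string `json:"exitcode"`
        } `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // skip any non-JSON noise
            }
            fmt.Printf("%s: %s\n", e.Type, e.Data.Message)
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("failed: %s (exit code %s)\n", e.Data.Name, e.Data.Exitcode)
                return
            }
        }
    }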

                                                
                                    
TestRunningBinaryUpgrade (65.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2737653433 start -p running-upgrade-820547 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2737653433 start -p running-upgrade-820547 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.109077274s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-820547 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-820547 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.45604095s)
helpers_test.go:175: Cleaning up "running-upgrade-820547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-820547
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-820547: (2.469532914s)
--- PASS: TestRunningBinaryUpgrade (65.27s)

                                                
                                    
TestKubernetesUpgrade (163.83s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.465519513s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-055159
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-055159: (1.262439115s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-055159 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-055159 status --format={{.Host}}: exit status 7 (92.746629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1009 19:22:37.155346  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m22.654311876s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-055159 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (112.02611ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-055159] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-055159
	    minikube start -p kubernetes-upgrade-055159 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0551592 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-055159 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-055159 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.635093403s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-055159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-055159
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-055159: (2.497727146s)
--- PASS: TestKubernetesUpgrade (163.83s)
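The exit-106 refusal above comes from comparing the cluster's running Kubernetes version with the requested one and rejecting anything that moves backwards. Below is a sketch of that kind of version comparison; it is not minikube's implementation, just an illustration of the guard the log exercises.

    // downgrade_guard.go - reject a requested version lower than the running one.
    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
    )

    // parse turns "v1.34.1" into [1 34 1]; error handling is minimal for the sketch.
    func parse(v string) ([3]int, error) {
        var out [3]int
        parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
        if len(parts) != 3 {
            return out, fmt.Errorf("bad version %q", v)
        }
        for i, p := range parts {
            n, err := strconv.Atoi(p)
            if err != nil {
                return out, err
            }
            out[i] = n
        }
        return out, nil
    }

    func isDowngrade(current, requested string) (bool, error) {
        c, err := parse(current)
        if err != nil {
            return false, err
        }
        r, err := parse(requested)
        if err != nil {
            return false, err
        }
        for i := range c {
            if r[i] != c[i] {
                return r[i] < c[i], nil
            }
        }
        return false, nil
    }

    func main() {
        down, err := isDowngrade("v1.34.1", "v1.28.0")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("downgrade requested:", down) // true: refuse, as the exit-106 path above does
    }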

                                                
                                    
TestMissingContainerUpgrade (129.92s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.565099662 start -p missing-upgrade-636288 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.565099662 start -p missing-upgrade-636288 --memory=3072 --driver=docker  --container-runtime=crio: (1m13.661639266s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-636288
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-636288
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-636288 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-636288 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.655463048s)
helpers_test.go:175: Cleaning up "missing-upgrade-636288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-636288
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-636288: (2.301912606s)
--- PASS: TestMissingContainerUpgrade (129.92s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:116: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034324 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-034324 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (89.828935ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-034324] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
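The MK_USAGE failure above (exit 14) is a mutually-exclusive-flags check: --no-kubernetes cannot be combined with an explicit --kubernetes-version. A sketch of such a guard with the standard flag package, not minikube's own validation:

    // flag_conflict.go - reject two flags that cannot be used together.
    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        if *noK8s && *k8sVersion != "" {
            fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // the usage exit code seen in the log above
        }
        fmt.Println("flags OK")
    }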

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034324 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-034324 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.444295168s)
no_kubernetes_test.go:233: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-034324 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.87s)

TestNoKubernetes/serial/StartWithStopK8s (46.96s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:145: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1009 19:21:14.049508  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:145: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.731792889s)
no_kubernetes_test.go:233: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-034324 status -o json
no_kubernetes_test.go:233: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-034324 status -o json: exit status 2 (336.92057ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-034324","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:157: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-034324
no_kubernetes_test.go:157: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-034324: (1.893199659s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.96s)

TestNoKubernetes/serial/Start (5.8s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-034324 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.795957611s)
--- PASS: TestNoKubernetes/serial/Start (5.80s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-034324 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:180: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-034324 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.679975ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (0.68s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:212: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-034324
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-034324: (1.218108597s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (8.49s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:224: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034324 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:224: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-034324 --driver=docker  --container-runtime=crio: (8.492150411s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-034324 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:180: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-034324 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.357095ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStoppedBinaryUpgrade/Setup (8.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.85s)

TestStoppedBinaryUpgrade/Upgrade (64.08s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.888601848 start -p stopped-upgrade-702726 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.888601848 start -p stopped-upgrade-702726 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.073609264s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.888601848 -p stopped-upgrade-702726 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.888601848 -p stopped-upgrade-702726 stop: (2.016711394s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-702726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-702726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.991461095s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.08s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-702726
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-702726: (1.332081263s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

TestPause/serial/Start (84.89s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-446510 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-446510 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.887147933s)
--- PASS: TestPause/serial/Start (84.89s)

TestPause/serial/SecondStartNoReconfiguration (24.6s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-446510 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1009 19:25:57.124399  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:14.045758  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-446510 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.585221941s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.60s)

TestNetworkPlugins/group/false (3.65s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-224541 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-224541 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (245.902606ms)

                                                
                                                
-- stdout --
	* [false-224541] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:26:30.014990  449850 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:26:30.015203  449850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:30.015229  449850 out.go:374] Setting ErrFile to fd 2...
	I1009 19:26:30.015249  449850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:30.015671  449850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-284447/.minikube/bin
	I1009 19:26:30.016267  449850 out.go:368] Setting JSON to false
	I1009 19:26:30.031788  449850 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7741,"bootTime":1760030249,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1009 19:26:30.031945  449850 start.go:141] virtualization:  
	I1009 19:26:30.036169  449850 out.go:179] * [false-224541] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:26:30.040150  449850 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:26:30.040494  449850 notify.go:220] Checking for updates...
	I1009 19:26:30.061413  449850 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:26:30.064465  449850 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-284447/kubeconfig
	I1009 19:26:30.067610  449850 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-284447/.minikube
	I1009 19:26:30.070758  449850 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:26:30.073837  449850 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:26:30.077842  449850 config.go:182] Loaded profile config "force-systemd-flag-476949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:30.077983  449850 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:26:30.118381  449850 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:26:30.118524  449850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:26:30.183943  449850 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:26:30.174434684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:26:30.184059  449850 docker.go:318] overlay module found
	I1009 19:26:30.187500  449850 out.go:179] * Using the docker driver based on user configuration
	I1009 19:26:30.190333  449850 start.go:305] selected driver: docker
	I1009 19:26:30.190354  449850 start.go:925] validating driver "docker" against <nil>
	I1009 19:26:30.190369  449850 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:26:30.193889  449850 out.go:203] 
	W1009 19:26:30.196901  449850 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1009 19:26:30.199722  449850 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-224541 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-224541" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-224541

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224541"

                                                
                                                
----------------------- debugLogs end: false-224541 [took: 3.253326262s] --------------------------------
helpers_test.go:175: Cleaning up "false-224541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-224541
--- PASS: TestNetworkPlugins/group/false (3.65s)
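Note: the MK_USAGE exit captured earlier in this block is the expected result of requesting --cni=false with the crio runtime; minikube's start-time validation requires a CNI for crio. As a hedged sketch of a flag combination that would pass this particular validation (the bridge CNI value is only an illustrative choice, not taken from this run):

	$ minikube start -p false-224541 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio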

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1009 19:36:14.048142  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.647523521s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.65s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-271815 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [76114c03-98f6-4ea8-a226-c9d7b7a2cb8c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [76114c03-98f6-4ea8-a226-c9d7b7a2cb8c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.002844418s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-271815 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-271815 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-271815 --alsologtostderr -v=3: (11.948045102s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

TestStartStop/group/no-preload/serial/FirstStart (76.97s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.967697211s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-271815 -n old-k8s-version-271815
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-271815 -n old-k8s-version-271815: exit status 7 (99.554989ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-271815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (58.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1009 19:37:37.154734  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-271815 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.572025207s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-271815 -n old-k8s-version-271815
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (58.95s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h9ccf" [005602fb-94aa-46b2-94ef-5bb2d79d974f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003975577s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h9ccf" [005602fb-94aa-46b2-94ef-5bb2d79d974f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00383985s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-271815 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/DeployApp (11.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-678119 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5cf9de21-70e1-4070-8c67-80a49ebe678c] Pending
helpers_test.go:352: "busybox" [5cf9de21-70e1-4070-8c67-80a49ebe678c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003410524s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-678119 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.40s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-271815 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/FirstStart (84.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.64177135s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.64s)

TestStartStop/group/no-preload/serial/Stop (12.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-678119 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-678119 --alsologtostderr -v=3: (12.077035389s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-678119 -n no-preload-678119
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-678119 -n no-preload-678119: exit status 7 (127.704022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-678119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/no-preload/serial/SecondStart (62.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-678119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.116097741s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-678119 -n no-preload-678119
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (62.51s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6zf28" [113adb1f-c2d9-42a6-9d6a-8fce7939781b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003743377s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6zf28" [113adb1f-c2d9-42a6-9d6a-8fce7939781b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003794056s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-678119 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/DeployApp (8.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-779570 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [da851351-fb55-4c75-887c-1b549c0858fd] Pending
helpers_test.go:352: "busybox" [da851351-fb55-4c75-887c-1b549c0858fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [da851351-fb55-4c75-887c-1b549c0858fd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.009171361s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-779570 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.41s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-678119 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Stop (11.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-779570 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-779570 --alsologtostderr -v=3: (11.926247594s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.93s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.330940252s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.33s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-779570 -n embed-certs-779570
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-779570 -n embed-certs-779570: exit status 7 (70.695525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-779570 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
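A manual equivalent of this step: on a stopped profile, minikube status exits non-zero (exit 7 here, with Host reported as Stopped), and the harness treats that as acceptable before re-enabling the dashboard addon. A minimal sketch using the same commands as the log:

# exit status 7 only reflects the stopped host, so don't abort on it
out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-779570 -n embed-certs-779570 || true
out/minikube-linux-arm64 addons enable dashboard -p embed-certs-779570 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4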

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (63.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1009 19:41:14.046042  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-779570 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.415495214s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-779570 -n embed-certs-779570
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (63.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dm67w" [2d517ad9-c456-40c1-ae85-a137f48a5f5e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00380303s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dm67w" [2d517ad9-c456-40c1-ae85-a137f48a5f5e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003231188s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-779570 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)
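Both dashboard checks above amount to waiting for the kubernetes-dashboard pod to become Ready and then describing the metrics-scraper deployment. A rough kubectl-only equivalent (the wait invocation is an assumption; the namespace, label, timeout, and describe target come from the log):

kubectl --context embed-certs-779570 -n kubernetes-dashboard wait pod \
    -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
kubectl --context embed-certs-779570 -n kubernetes-dashboard \
    describe deploy/dashboard-metrics-scraper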

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-779570 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-661639 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [502e5162-8647-4d5c-8bb6-483efa4658f3] Pending
helpers_test.go:352: "busybox" [502e5162-8647-4d5c-8bb6-483efa4658f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [502e5162-8647-4d5c-8bb6-483efa4658f3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003374579s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-661639 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1009 19:41:47.863859  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:47.870247  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:47.881600  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:47.903009  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:47.947254  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:48.029191  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:48.190646  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:48.513918  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:49.156528  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:41:50.438507  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.347782396s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.35s)
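The newest-cni profile starts the cluster without deploying a CNI (--network-plugin=cni) and forwards the pod network CIDR straight to kubeadm via --extra-config, which takes component.key=value settings; this is why the UserAppExistsAfterStop and AddonExistsAfterStop steps further down only warn that pods cannot schedule without additional CNI setup. Condensed from the command above:

out/minikube-linux-arm64 start -p newest-cni-532612 --memory=3072 \
    --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1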

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-661639 --alsologtostderr -v=3
E1009 19:41:58.122655  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-661639 --alsologtostderr -v=3: (11.880526731s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639: exit status 7 (82.532984ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-661639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1009 19:42:08.364620  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:42:20.223436  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:42:28.846937  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-661639 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.742799674s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-661639 -n default-k8s-diff-port-661639
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-532612 --alsologtostderr -v=3
E1009 19:42:37.126188  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:42:37.154569  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-532612 --alsologtostderr -v=3: (12.258557254s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-532612 -n newest-cni-532612
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-532612 -n newest-cni-532612: exit status 7 (74.553572ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-532612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-532612 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.74818688s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-532612 -n newest-cni-532612
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-532612 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zdn2m" [67373ad7-62e9-47b6-a07e-5846f3a33dae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005023244s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zdn2m" [67373ad7-62e9-47b6-a07e-5846f3a33dae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0041573s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-661639 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.661449501s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.66s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-661639 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (86.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1009 19:43:27.273397  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:43:32.395156  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:43:42.637165  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:44:03.118562  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:44:31.729892  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/old-k8s-version-271815/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m26.41085029s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.41s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-224541 "pgrep -a kubelet"
I1009 19:44:37.984219  286309 config.go:182] Loaded profile config "auto-224541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)
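The KubeletFlags check simply dumps the kubelet command line from inside the node. Splitting the output into one flag per line makes it easier to eyeball; the tr/grep post-processing is an illustrative addition, not something net_test.go does:

out/minikube-linux-arm64 ssh -p auto-224541 "pgrep -a kubelet"
# one flag per line, purely for readability
out/minikube-linux-arm64 ssh -p auto-224541 "pgrep -a kubelet" | tr ' ' '\n' | grep '^--'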

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-224541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hswr2" [4f929010-e94a-4e9c-90a4-c338fa22000f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hswr2" [4f929010-e94a-4e9c-90a4-c338fa22000f] Running
E1009 19:44:44.080125  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003253835s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)
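The NetCatPod step deploys testdata/netcat-deployment.yaml and waits for the app=netcat pod. The manifest below is only a stand-in sketch: the Deployment/Service name, the app=netcat label, the dnsutils container name, and port 8080 are taken from the log, while the agnhost image and its command are assumptions.

kubectl --context auto-224541 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils
        image: registry.k8s.io/e2e-test-images/agnhost:2.40  # assumed image
        command: ["/agnhost", "netexec", "--http-port=8080"] # assumed; serves TCP on 8080
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: netcat
spec:
  selector:
    app: netcat
  ports:
  - port: 8080
    targetPort: 8080
EOF
kubectl --context auto-224541 wait pod -l app=netcat --for=condition=Ready --timeout=15m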

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-224541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
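Taken together, the DNS, Localhost, and HairPin steps above probe three things from inside the netcat pod: cluster DNS resolution of the kubernetes.default Service, a plain loopback connection to the pod's own port, and hairpin traffic, i.e. reaching the pod back through its own Service name. The same three checks, verbatim from the log:

kubectl --context auto-224541 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"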

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-pmbbc" [4348839e-7cf6-4a70-82a3-c5003b6aabcc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003704533s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-224541 "pgrep -a kubelet"
I1009 19:44:57.640650  286309 config.go:182] Loaded profile config "kindnet-224541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-224541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kc4s8" [b35ead16-4e77-4437-9f5f-928dd78a8be1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kc4s8" [b35ead16-4e77-4437-9f5f-928dd78a8be1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003649786s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-224541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m16.692996779s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.69s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1009 19:46:06.002355  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:46:14.046049  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/addons-419518/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.557849038s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.56s)
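Worth noting across the Start runs in this group: --cni takes either a built-in plugin name (kindnet, calico, flannel, bridge in this report) or, as here, a path to a CNI manifest that minikube applies for you. A condensed sketch of both forms, using the same profiles and flags as the log:

# built-in plugin selected by name
out/minikube-linux-arm64 start -p calico-224541 --memory=3072 --cni=calico \
    --driver=docker --container-runtime=crio
# custom CNI manifest supplied by path
out/minikube-linux-arm64 start -p custom-flannel-224541 --memory=3072 \
    --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio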

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-25tgh" [20c05862-2d51-461f-94b0-c3b6acafa666] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003702884s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-224541 "pgrep -a kubelet"
I1009 19:46:34.282485  286309 config.go:182] Loaded profile config "calico-224541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-224541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kqgpd" [075b969e-6d3a-43c1-a101-f620d15f4d41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kqgpd" [075b969e-6d3a-43c1-a101-f620d15f4d41] Running
E1009 19:46:43.338827  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:46:43.345148  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:46:43.356578  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:46:43.378062  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003821858s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-224541 "pgrep -a kubelet"
I1009 19:46:38.051301  286309 config.go:182] Loaded profile config "custom-flannel-224541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-224541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7gxcg" [0cac4e27-83b1-4a1b-ba6f-c6304f849df2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7gxcg" [0cac4e27-83b1-4a1b-ba6f-c6304f849df2] Running
E1009 19:46:43.419849  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:46:43.501344  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:46:43.662737  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:46:43.984445  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:46:44.626587  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003301571s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-224541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1009 19:46:45.907977  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-224541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m24.323099202s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (70.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1009 19:47:24.315058  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:47:37.154337  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/functional-141121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:05.276824  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/default-k8s-diff-port-661639/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:22.142432  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/no-preload-678119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.045758725s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.05s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rm65m" [64ed2fe1-e14e-4d3c-8820-cc8fdd762deb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003175923s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-224541 "pgrep -a kubelet"
I1009 19:48:34.157168  286309 config.go:182] Loaded profile config "flannel-224541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-224541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tf8sl" [93b07544-eb9a-4323-866e-b9361c338b6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tf8sl" [93b07544-eb9a-4323-866e-b9361c338b6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005672167s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-224541 "pgrep -a kubelet"
I1009 19:48:36.755905  286309 config.go:182] Loaded profile config "enable-default-cni-224541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-224541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s9wpb" [434ad784-2c24-4a27-bd4a-44f6df51edc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s9wpb" [434ad784-2c24-4a27-bd4a-44f6df51edc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004153475s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-224541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-224541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (73.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-224541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m13.621153441s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.62s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-224541 "pgrep -a kubelet"
I1009 19:50:25.458275  286309 config.go:182] Loaded profile config "bridge-224541": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-224541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ngm2c" [88aa08dc-4b87-47f2-9309-c59b26c742a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ngm2c" [88aa08dc-4b87-47f2-9309-c59b26c742a2] Running
E1009 19:50:32.191405  286309 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-284447/.minikube/profiles/kindnet-224541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004084411s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-224541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
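The DNS, Localhost and HairPin checks above all exec into the netcat deployment; the last one exercises hairpin NAT, i.e. a pod reaching itself through its own Service name. A sketch of running the same three probes by hand (assuming the bridge-224541 cluster and the netcat Service on port 8080 are still present):

kubectl --context bridge-224541 exec deployment/netcat -- nslookup kubernetes.default                  # cluster DNS
kubectl --context bridge-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # loopback
kubectl --context bridge-224541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin via Service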

                                                
                                    

Test skip (31/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-187653 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-187653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-187653
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-557073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-557073
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-224541 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-224541" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-224541

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224541"

                                                
                                                
----------------------- debugLogs end: kubenet-224541 [took: 3.364208485s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-224541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-224541
--- SKIP: TestNetworkPlugins/group/kubenet (3.52s)
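Every probe in the debugLogs block above fails because the kubenet test is skipped before minikube start ever runs, so neither a kubeconfig context nor a minikube profile named kubenet-224541 exists. A sketch of collecting a few of the same diagnostics against a profile that does exist (the profile name is a placeholder; substitute any running cluster):

out/minikube-linux-arm64 profile list                                  # confirm which profiles exist
out/minikube-linux-arm64 ssh -p <profile> "sudo crictl pods"           # pods as seen by the CRI runtime
kubectl --context <profile> get nodes,svc,endpoints,ds,deploy,pods -A  # cluster-wide object summary
kubectl --context <profile> describe ds kube-proxy -n kube-system      # kube-proxy daemon set details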

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-224541 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-224541" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-224541

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-224541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224541"

                                                
                                                
----------------------- debugLogs end: cilium-224541 [took: 3.726786987s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-224541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-224541
--- SKIP: TestNetworkPlugins/group/cilium (3.89s)

                                                
                                    